Jan 21 15:31:56 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 15:31:56 crc restorecon[4748]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:31:56 crc restorecon[4748]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc 
restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc 
restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 
15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc 
restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc 
restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:56
crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 15:31:56 crc restorecon[4748]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:56 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 
15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc 
restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc 
restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc 
restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 
crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc 
restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc 
restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc 
restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc 
restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:31:57 crc restorecon[4748]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 15:31:57 crc kubenswrapper[4890]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 15:31:57 crc kubenswrapper[4890]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 15:31:57 crc kubenswrapper[4890]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 15:31:57 crc kubenswrapper[4890]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 15:31:57 crc kubenswrapper[4890]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 15:31:57 crc kubenswrapper[4890]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.753371 4890 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.756995 4890 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757018 4890 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757024 4890 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757030 4890 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757036 4890 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757041 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757047 4890 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757053 4890 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757058 4890 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:31:57 crc 
kubenswrapper[4890]: W0121 15:31:57.757063 4890 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757068 4890 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757073 4890 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757079 4890 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757084 4890 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757089 4890 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757094 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757099 4890 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757105 4890 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757112 4890 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757119 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757124 4890 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757129 4890 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757135 4890 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757140 4890 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757145 4890 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757150 4890 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757155 4890 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757161 4890 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757165 4890 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757171 4890 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757176 4890 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757183 4890 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757188 4890 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757193 4890 feature_gate.go:330] unrecognized feature 
gate: PlatformOperators Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757198 4890 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757203 4890 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757208 4890 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757212 4890 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757218 4890 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757222 4890 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757228 4890 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757236 4890 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757242 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757249 4890 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757254 4890 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757260 4890 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757266 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757272 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757278 4890 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757285 4890 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757290 4890 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757295 4890 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757300 4890 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757306 4890 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757312 4890 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757318 4890 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757324 4890 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757329 4890 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757334 4890 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757340 4890 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757345 4890 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757369 4890 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757375 4890 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757381 4890 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757386 4890 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757393 4890 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757399 4890 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757405 4890 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757410 4890 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757417 4890 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.757423 4890 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757541 4890 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757557 4890 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757569 4890 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757579 4890 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757588 4890 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757595 4890 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757604 4890 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757613 4890 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757620 4890 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757627 4890 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 
15:31:57.757635 4890 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757642 4890 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757649 4890 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757657 4890 flags.go:64] FLAG: --cgroup-root="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757664 4890 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757672 4890 flags.go:64] FLAG: --client-ca-file="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757679 4890 flags.go:64] FLAG: --cloud-config="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757685 4890 flags.go:64] FLAG: --cloud-provider="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757692 4890 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757705 4890 flags.go:64] FLAG: --cluster-domain="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757714 4890 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757722 4890 flags.go:64] FLAG: --config-dir="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757729 4890 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757737 4890 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757747 4890 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757754 4890 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757761 4890 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 15:31:57 crc kubenswrapper[4890]: 
I0121 15:31:57.757770 4890 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757778 4890 flags.go:64] FLAG: --contention-profiling="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757786 4890 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757794 4890 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757802 4890 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757810 4890 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757821 4890 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757828 4890 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757836 4890 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757843 4890 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757851 4890 flags.go:64] FLAG: --enable-server="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757858 4890 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757868 4890 flags.go:64] FLAG: --event-burst="100" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757876 4890 flags.go:64] FLAG: --event-qps="50" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757884 4890 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757891 4890 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757901 4890 flags.go:64] FLAG: --eviction-hard="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 
15:31:57.757912 4890 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757920 4890 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757929 4890 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757937 4890 flags.go:64] FLAG: --eviction-soft="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757945 4890 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757952 4890 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757961 4890 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757970 4890 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757978 4890 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757986 4890 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.757994 4890 flags.go:64] FLAG: --feature-gates="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758013 4890 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758022 4890 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758030 4890 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758038 4890 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758046 4890 flags.go:64] FLAG: --healthz-port="10248" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758053 4890 flags.go:64] FLAG: --help="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 
15:31:57.758061 4890 flags.go:64] FLAG: --hostname-override="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758068 4890 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758075 4890 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758083 4890 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758091 4890 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758098 4890 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758106 4890 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758113 4890 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758120 4890 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758127 4890 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758135 4890 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758143 4890 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758150 4890 flags.go:64] FLAG: --kube-reserved="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758157 4890 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758164 4890 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758172 4890 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758179 4890 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 15:31:57 crc 
kubenswrapper[4890]: I0121 15:31:57.758187 4890 flags.go:64] FLAG: --lock-file="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758195 4890 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758203 4890 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758210 4890 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758222 4890 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758229 4890 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758237 4890 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758244 4890 flags.go:64] FLAG: --logging-format="text" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758251 4890 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758259 4890 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758266 4890 flags.go:64] FLAG: --manifest-url="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758273 4890 flags.go:64] FLAG: --manifest-url-header="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758283 4890 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758291 4890 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758300 4890 flags.go:64] FLAG: --max-pods="110" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758307 4890 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758312 4890 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 15:31:57 crc 
kubenswrapper[4890]: I0121 15:31:57.758318 4890 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758324 4890 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758330 4890 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758336 4890 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758342 4890 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758377 4890 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758383 4890 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758389 4890 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758395 4890 flags.go:64] FLAG: --pod-cidr="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758400 4890 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758410 4890 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758416 4890 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758422 4890 flags.go:64] FLAG: --pods-per-core="0" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758428 4890 flags.go:64] FLAG: --port="10250" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758434 4890 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758440 4890 flags.go:64] FLAG: 
--provider-id="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758446 4890 flags.go:64] FLAG: --qos-reserved="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758452 4890 flags.go:64] FLAG: --read-only-port="10255" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758457 4890 flags.go:64] FLAG: --register-node="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758463 4890 flags.go:64] FLAG: --register-schedulable="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758469 4890 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758480 4890 flags.go:64] FLAG: --registry-burst="10" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758485 4890 flags.go:64] FLAG: --registry-qps="5" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758491 4890 flags.go:64] FLAG: --reserved-cpus="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758496 4890 flags.go:64] FLAG: --reserved-memory="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758503 4890 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758508 4890 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758514 4890 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758520 4890 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758526 4890 flags.go:64] FLAG: --runonce="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758532 4890 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758539 4890 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758545 4890 flags.go:64] FLAG: --seccomp-default="false" Jan 
21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758552 4890 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758559 4890 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758565 4890 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758571 4890 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758577 4890 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758583 4890 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758588 4890 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758594 4890 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758599 4890 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758606 4890 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758612 4890 flags.go:64] FLAG: --system-cgroups="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758617 4890 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758626 4890 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758631 4890 flags.go:64] FLAG: --tls-cert-file="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758637 4890 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758651 4890 flags.go:64] FLAG: --tls-min-version="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758657 4890 flags.go:64] FLAG: 
--tls-private-key-file="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758680 4890 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758687 4890 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758692 4890 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758698 4890 flags.go:64] FLAG: --v="2" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758706 4890 flags.go:64] FLAG: --version="false" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758713 4890 flags.go:64] FLAG: --vmodule="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758720 4890 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.758726 4890 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758867 4890 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758874 4890 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758881 4890 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758887 4890 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758892 4890 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758897 4890 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758904 4890 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758909 4890 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758914 4890 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758920 4890 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758926 4890 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758933 4890 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758939 4890 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758945 4890 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758952 4890 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758957 4890 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758963 4890 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758968 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758973 4890 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758978 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758983 4890 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758987 4890 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758992 4890 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.758997 4890 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759002 4890 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759007 4890 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759011 4890 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759016 4890 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759021 4890 feature_gate.go:330] 
unrecognized feature gate: NutanixMultiSubnets Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759026 4890 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759032 4890 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759038 4890 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759043 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759049 4890 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759054 4890 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759059 4890 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759064 4890 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759069 4890 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759074 4890 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759079 4890 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759083 4890 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759094 4890 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759099 
4890 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759104 4890 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759109 4890 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759113 4890 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759119 4890 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759124 4890 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759129 4890 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759133 4890 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759138 4890 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759143 4890 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759147 4890 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759153 4890 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759157 4890 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759162 4890 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759167 4890 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 
15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759171 4890 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759176 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759181 4890 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759186 4890 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759191 4890 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759196 4890 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759200 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759205 4890 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759210 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759215 4890 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759219 4890 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759224 4890 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759229 4890 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.759234 4890 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.759251 4890 feature_gate.go:386] feature 
gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.766972 4890 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.766995 4890 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767072 4890 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767080 4890 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767087 4890 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767093 4890 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767099 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767105 4890 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767110 4890 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767115 4890 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767121 4890 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 
15:31:57.767127 4890 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767134 4890 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767143 4890 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767150 4890 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767156 4890 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767162 4890 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767167 4890 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767173 4890 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767179 4890 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767184 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767190 4890 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767196 4890 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767201 4890 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767207 4890 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767212 
4890 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767218 4890 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767223 4890 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767229 4890 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767236 4890 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767242 4890 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767248 4890 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767253 4890 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767261 4890 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767267 4890 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767273 4890 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767280 4890 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767287 4890 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767293 4890 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767299 4890 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767305 4890 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767311 4890 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767318 4890 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767323 4890 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767328 4890 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767333 4890 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767338 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767344 4890 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767365 4890 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767371 4890 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767376 4890 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767382 4890 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767387 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767393 4890 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767398 4890 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767403 4890 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767408 4890 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767415 4890 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767420 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767425 4890 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767430 4890 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767436 4890 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767441 4890 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767448 4890 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767453 4890 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767459 4890 feature_gate.go:330] unrecognized feature gate: 
PrivateHostedZoneAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767464 4890 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767469 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767474 4890 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767479 4890 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767484 4890 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767489 4890 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767495 4890 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.767504 4890 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767673 4890 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767682 4890 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767688 4890 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:31:57 crc 
kubenswrapper[4890]: W0121 15:31:57.767694 4890 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767700 4890 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767706 4890 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767711 4890 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767717 4890 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767723 4890 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767728 4890 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767733 4890 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767738 4890 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767743 4890 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767749 4890 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767754 4890 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767759 4890 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767765 4890 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767770 4890 feature_gate.go:330] unrecognized feature gate: 
SignatureStores Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767775 4890 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767780 4890 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767786 4890 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767791 4890 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767797 4890 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767802 4890 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767809 4890 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767817 4890 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767823 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767830 4890 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767836 4890 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767842 4890 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767848 4890 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767854 4890 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767861 4890 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767867 4890 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767874 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767880 4890 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767886 4890 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767892 4890 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767898 4890 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767903 4890 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767909 4890 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767915 4890 feature_gate.go:330] unrecognized feature 
gate: VolumeGroupSnapshot Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767920 4890 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767926 4890 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767932 4890 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767938 4890 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767943 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767949 4890 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767955 4890 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767960 4890 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767965 4890 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767971 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767977 4890 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767982 4890 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767987 4890 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.767992 4890 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:31:57 crc 
kubenswrapper[4890]: W0121 15:31:57.767998 4890 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768003 4890 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768008 4890 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768013 4890 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768018 4890 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768024 4890 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768029 4890 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768034 4890 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768039 4890 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768044 4890 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768050 4890 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768055 4890 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768060 4890 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768065 4890 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.768071 4890 feature_gate.go:330] unrecognized 
feature gate: AlibabaPlatform Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.768078 4890 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.768240 4890 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.771480 4890 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.771576 4890 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.772114 4890 server.go:997] "Starting client certificate rotation" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.772134 4890 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.772326 4890 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-22 14:00:58.188373303 +0000 UTC Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.772500 4890 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.777769 4890 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.779418 4890 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.2:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.780590 4890 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.790757 4890 log.go:25] "Validated CRI v1 runtime API" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.812567 4890 log.go:25] "Validated CRI v1 image API" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.814232 4890 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.818184 4890 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-15-23-38-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.818228 4890 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.834866 4890 manager.go:217] Machine: {Timestamp:2026-01-21 15:31:57.833343844 +0000 UTC m=+0.194786293 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:18a17417-1572-4a09-b67d-6fcf4ac1275e BootID:f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 
Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3a:3c:c1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3a:3c:c1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:21:f3:69 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c2:62:2d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d4:f7:68 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7b:c4:9f Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:98:5c:dd Speed:-1 Mtu:1496} {Name:eth10 MacAddress:86:97:a9:e3:5b:8f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:45:87:c7:8a:d7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] 
Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.835156 
4890 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.835427 4890 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.836808 4890 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.837063 4890 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.837131 4890 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.837419 4890 topology_manager.go:138] "Creating topology manager with none policy"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.837432 4890 container_manager_linux.go:303] "Creating device plugin manager"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.837777 4890 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.837818 4890 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.838061 4890 state_mem.go:36] "Initialized new in-memory state store"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.838166 4890 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.839284 4890 kubelet.go:418] "Attempting to sync node with API server"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.839311 4890 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.839365 4890 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.839387 4890 kubelet.go:324] "Adding apiserver pod source"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.839408 4890 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.841465 4890 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.841574 4890 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.841603 4890 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused
Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.841693 4890 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.2:6443: connect: connection refused" logger="UnhandledError"
Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.841713 4890 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.2:6443: connect: connection refused" logger="UnhandledError"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.841999 4890 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.843473 4890 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844429 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844471 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844485 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844500 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844521 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844535 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844548 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844569 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844584 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844597 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844615 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.844641 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.845165 4890 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.845962 4890 server.go:1280] "Started kubelet"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.848242 4890 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.848287 4890 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.849575 4890 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 21 15:31:57 crc systemd[1]: Started Kubernetes Kubelet.
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.850284 4890 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.852751 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.852804 4890 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.851857 4890 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.2:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc8c8ab7da846 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:31:57.845915718 +0000 UTC m=+0.207358167,LastTimestamp:2026-01-21 15:31:57.845915718 +0000 UTC m=+0.207358167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.853209 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:01:36.90584447 +0000 UTC
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.854858 4890 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.854904 4890 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.855132 4890 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.855317 4890 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.856730 4890 server.go:460] "Adding debug handlers to kubelet server"
Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.857451 4890 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused
Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.857732 4890 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.2:6443: connect: connection refused" logger="UnhandledError"
Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.857873 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused" interval="200ms"
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.858485 4890 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.858678 4890 factory.go:55] Registering systemd factory
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.858815 4890 factory.go:221] Registration of the systemd container factory successfully
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.861283 4890 factory.go:153] Registering CRI-O factory
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.861337 4890 factory.go:221] Registration of the crio container factory successfully
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.861405 4890 factory.go:103] Registering Raw factory
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.861435 4890 manager.go:1196] Started watching for new ooms in manager
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.863527 4890 manager.go:319] Starting recovery of all containers
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869607 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869695 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869748 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869761 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869772 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869784 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869797 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869810 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869825 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869837 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869849 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869862 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869872 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869907 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869919 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.869968 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870052 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870065 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870076 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870118 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870130 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870143 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870155 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870166 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870198 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870217 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870237 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870251 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870292 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870330 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870390 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870404 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870482 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870496 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870508 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870519 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870533 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870546 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870560 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870578 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870622 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870638 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870652 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870664 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870676 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870689 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870732 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870748 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870786 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870800 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870812 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870847 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870895 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870909 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870923 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870936 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870966 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870979 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.870991 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871003 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871042 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871053 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871066 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871079 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871112 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871124 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871141 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871154 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871166 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871179 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871190 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871202 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871235 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871249 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871281 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871294 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871306 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871317 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871329 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871341 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871398 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871409 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871421 4890
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871500 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871511 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871528 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871560 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871573 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871722 4890 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871739 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.871750 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.874635 4890 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.874783 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.874815 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.874852 
4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.874875 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.874911 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.874933 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.874954 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.874984 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875007 4890 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875043 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875066 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875089 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875117 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875171 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875213 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875296 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875333 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875401 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875441 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875478 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875504 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875541 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875569 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875603 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875624 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875656 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875678 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" 
seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875698 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875732 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875757 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875779 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875820 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875843 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:31:57 crc 
kubenswrapper[4890]: I0121 15:31:57.875874 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875898 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875921 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875950 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.875973 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876012 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876033 4890 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876053 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876082 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876104 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876136 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876156 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876181 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876212 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876234 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876263 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876285 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876306 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876336 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876380 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876408 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876430 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876452 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876478 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876497 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876522 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876548 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876567 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876597 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876616 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876636 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876664 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876683 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876713 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876732 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876752 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876781 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" 
seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876809 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876842 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876860 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876879 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876906 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876925 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: 
I0121 15:31:57.876956 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876978 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.876998 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877025 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877044 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877063 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877091 4890 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877111 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877146 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877166 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877187 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877216 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877237 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877267 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877289 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877312 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877343 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877387 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877415 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877435 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877454 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877484 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877504 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877533 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877553 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877582 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877610 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877630 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877659 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877678 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877704 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" 
seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877732 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877752 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877780 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877803 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877822 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877849 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 
15:31:57.877870 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.877891 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.878002 4890 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.878065 4890 reconstruct.go:97] "Volume reconstruction finished" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.878085 4890 reconciler.go:26] "Reconciler: start to sync state" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.901828 4890 manager.go:324] Recovery completed Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.908897 4890 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.912326 4890 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.912484 4890 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.912620 4890 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.912953 4890 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 15:31:57 crc kubenswrapper[4890]: W0121 15:31:57.913477 4890 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.913652 4890 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.2:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.917152 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.919073 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.919237 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.919345 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.921736 4890 cpu_manager.go:225] 
"Starting CPU manager" policy="none" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.921751 4890 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.921768 4890 state_mem.go:36] "Initialized new in-memory state store" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.936832 4890 policy_none.go:49] "None policy: Start" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.938888 4890 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.938914 4890 state_mem.go:35] "Initializing new in-memory state store" Jan 21 15:31:57 crc kubenswrapper[4890]: E0121 15:31:57.955920 4890 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.999866 4890 manager.go:334] "Starting Device Plugin manager" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.999919 4890 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 15:31:57 crc kubenswrapper[4890]: I0121 15:31:57.999931 4890 server.go:79] "Starting device plugin registration server" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.000300 4890 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.000317 4890 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.000712 4890 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.000783 4890 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.000790 4890 plugin_manager.go:118] "Starting Kubelet Plugin Manager" 
Jan 21 15:31:58 crc kubenswrapper[4890]: E0121 15:31:58.008780 4890 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.014042 4890 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.014135 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.016247 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.016278 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.016286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.016447 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.016854 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.016925 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.017369 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.017404 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.017422 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.017526 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.017824 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.017930 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.018652 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.018759 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.018780 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.018890 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.018915 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.018946 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.019007 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.019168 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.019234 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.019422 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.019454 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.019483 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.020137 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.020167 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.020191 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.020206 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.020222 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.020231 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.021523 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: 
I0121 15:31:58.021735 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.021759 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.022167 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.022206 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.022224 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.022832 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.022879 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.022893 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.023434 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.023475 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.024385 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.024413 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.024422 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: E0121 15:31:58.059003 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused" interval="400ms" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080084 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080165 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080205 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080236 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080268 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080423 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080483 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080517 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080550 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080672 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080794 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080846 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080890 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.080972 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.081016 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.100472 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.101689 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.101835 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.101849 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.102057 4890 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:31:58 crc kubenswrapper[4890]: E0121 15:31:58.102662 4890 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.2:6443: connect: connection refused" 
node="crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182224 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182320 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182396 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182439 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182482 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182521 4890 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182563 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182587 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182627 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182669 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182582 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182742 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182749 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182602 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182948 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182597 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:31:58 crc 
kubenswrapper[4890]: I0121 15:31:58.182995 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183040 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183058 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183082 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183153 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.182633 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183206 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183244 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183247 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183306 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183342 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183510 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183583 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.183650 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.302759 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.303884 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.303927 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.303938 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.303961 4890 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:31:58 crc kubenswrapper[4890]: E0121 15:31:58.304387 4890 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.129.56.2:6443: connect: connection refused" node="crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.354971 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.366435 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.382512 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.397943 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: W0121 15:31:58.399478 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b1010803a6d9cb80ad40c26acbec551f7d2653b3a92d458472a3d537ce7d6e3b WatchSource:0}: Error finding container b1010803a6d9cb80ad40c26acbec551f7d2653b3a92d458472a3d537ce7d6e3b: Status 404 returned error can't find the container with id b1010803a6d9cb80ad40c26acbec551f7d2653b3a92d458472a3d537ce7d6e3b Jan 21 15:31:58 crc kubenswrapper[4890]: W0121 15:31:58.400526 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3d6f4b245e7c904f7c53f3c7517da68027bc208790b7615b075757f4ffc96edf WatchSource:0}: Error finding container 3d6f4b245e7c904f7c53f3c7517da68027bc208790b7615b075757f4ffc96edf: Status 404 returned error can't find the container with id 3d6f4b245e7c904f7c53f3c7517da68027bc208790b7615b075757f4ffc96edf Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.404978 4890 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:31:58 crc kubenswrapper[4890]: W0121 15:31:58.409812 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f784c16cb4dcb8225b6838a685c118aae8ddeef1d2dd79c1caa6be1e0531099f WatchSource:0}: Error finding container f784c16cb4dcb8225b6838a685c118aae8ddeef1d2dd79c1caa6be1e0531099f: Status 404 returned error can't find the container with id f784c16cb4dcb8225b6838a685c118aae8ddeef1d2dd79c1caa6be1e0531099f Jan 21 15:31:58 crc kubenswrapper[4890]: W0121 15:31:58.413507 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-720ebb4aa17977cb3d6f220b5f6aec1168a3f6b235cf4058534055a0e1e53395 WatchSource:0}: Error finding container 720ebb4aa17977cb3d6f220b5f6aec1168a3f6b235cf4058534055a0e1e53395: Status 404 returned error can't find the container with id 720ebb4aa17977cb3d6f220b5f6aec1168a3f6b235cf4058534055a0e1e53395 Jan 21 15:31:58 crc kubenswrapper[4890]: W0121 15:31:58.432317 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-514fb1e96cfcb4e2c5e2c8fef7e9ce67e073c439d142664145643405f81ede62 WatchSource:0}: Error finding container 514fb1e96cfcb4e2c5e2c8fef7e9ce67e073c439d142664145643405f81ede62: Status 404 returned error can't find the container with id 514fb1e96cfcb4e2c5e2c8fef7e9ce67e073c439d142664145643405f81ede62 Jan 21 15:31:58 crc kubenswrapper[4890]: E0121 15:31:58.460118 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: 
connection refused" interval="800ms" Jan 21 15:31:58 crc kubenswrapper[4890]: W0121 15:31:58.690140 4890 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused Jan 21 15:31:58 crc kubenswrapper[4890]: E0121 15:31:58.690219 4890 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.2:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.704850 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.706097 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.706145 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.706157 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.706179 4890 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:31:58 crc kubenswrapper[4890]: E0121 15:31:58.706510 4890 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.2:6443: connect: connection refused" node="crc" Jan 21 15:31:58 crc kubenswrapper[4890]: W0121 15:31:58.825057 4890 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused Jan 21 15:31:58 crc kubenswrapper[4890]: E0121 15:31:58.825545 4890 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.2:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.851507 4890 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.853672 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:31:47.36382669 +0000 UTC Jan 21 15:31:58 crc kubenswrapper[4890]: W0121 15:31:58.859715 4890 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused Jan 21 15:31:58 crc kubenswrapper[4890]: E0121 15:31:58.859789 4890 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.2:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.925119 4890 generic.go:334] "Generic (PLEG): container finished" 
podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16" exitCode=0 Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.925263 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16"} Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.925511 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3d6f4b245e7c904f7c53f3c7517da68027bc208790b7615b075757f4ffc96edf"} Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.925660 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.927691 4890 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07" exitCode=0 Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.927796 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07"} Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.927857 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"514fb1e96cfcb4e2c5e2c8fef7e9ce67e073c439d142664145643405f81ede62"} Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.927967 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.928468 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.928501 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.928514 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.929025 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.929065 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.929077 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.929626 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd"} Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.929673 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"720ebb4aa17977cb3d6f220b5f6aec1168a3f6b235cf4058534055a0e1e53395"} Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.931277 4890 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede" exitCode=0 Jan 21 15:31:58 crc 
kubenswrapper[4890]: I0121 15:31:58.931368 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede"} Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.931394 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f784c16cb4dcb8225b6838a685c118aae8ddeef1d2dd79c1caa6be1e0531099f"} Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.931492 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.932485 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.932539 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.932557 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.934184 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.934225 4890 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c" exitCode=0 Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.934270 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c"} 
Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.934301 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b1010803a6d9cb80ad40c26acbec551f7d2653b3a92d458472a3d537ce7d6e3b"} Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.934447 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.935682 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.935718 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.935729 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.935797 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.935821 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:58 crc kubenswrapper[4890]: I0121 15:31:58.935835 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:59 crc kubenswrapper[4890]: E0121 15:31:59.261806 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused" interval="1.6s" Jan 21 15:31:59 crc kubenswrapper[4890]: W0121 15:31:59.348737 4890 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.2:6443: connect: connection refused Jan 21 15:31:59 crc kubenswrapper[4890]: E0121 15:31:59.348824 4890 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.2:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.506765 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.507939 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.507961 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.507971 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.507995 4890 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:31:59 crc kubenswrapper[4890]: E0121 15:31:59.508341 4890 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.2:6443: connect: connection refused" node="crc" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.796316 4890 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.854465 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2026-01-13 01:22:08.258094909 +0000 UTC Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.947874 4890 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef" exitCode=0 Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.947919 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef"} Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.948135 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.949749 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.949806 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.949826 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.952066 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1dec3c6ab3524fe62b68cbd9a0d85055c81972dc18663c7b3ee01d9899335a93"} Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.952145 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.952977 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:59 crc kubenswrapper[4890]: 
I0121 15:31:59.953013 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.953025 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.956365 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b"} Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.956415 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b"} Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.956433 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417"} Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.956615 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.958081 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.958145 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.958157 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:31:59 crc 
kubenswrapper[4890]: I0121 15:31:59.960966 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178"}
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.960999 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e"}
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.961013 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca"}
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.961158 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.963271 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.963328 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.963340 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.971911 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f"}
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.971968 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474"}
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.971988 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad"}
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.972006 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9"}
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.972126 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.973182 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.973232 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:31:59 crc kubenswrapper[4890]: I0121 15:31:59.973247 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.534779 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.674054 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.807473 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.855436 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:42:54.8593659 +0000 UTC
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.977937 4890 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04" exitCode=0
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.978041 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04"}
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.978304 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.979637 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.979671 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.979682 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.984336 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.984555 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5"}
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.984637 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.984694 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.984695 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.986172 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.986222 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.986224 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.986264 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.986283 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.986238 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.986226 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.986476 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:00 crc kubenswrapper[4890]: I0121 15:32:00.986496 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.108501 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.109634 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.109673 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.109684 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.109711 4890 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.782383 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.856133 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:42:24.515625177 +0000 UTC
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.990775 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5"}
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.990859 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.990891 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.990897 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.990855 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7"}
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.991156 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.992183 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.992215 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.992223 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.992378 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.992417 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.992428 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.992995 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.993030 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:01 crc kubenswrapper[4890]: I0121 15:32:01.993043 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:02 crc kubenswrapper[4890]: I0121 15:32:02.102075 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:32:02 crc kubenswrapper[4890]: I0121 15:32:02.856296 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:40:20.66037018 +0000 UTC
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.001699 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3"}
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.001756 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b"}
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.001776 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0"}
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.001848 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.001876 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.003279 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.003306 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.003317 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.003932 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.003995 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.004023 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.211761 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.211953 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.213330 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.213411 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.213424 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.218452 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.856512 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:01:59.708702616 +0000 UTC
Jan 21 15:32:03 crc kubenswrapper[4890]: I0121 15:32:03.870710 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.003792 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.003837 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.005384 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.005429 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.005454 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.005516 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.005543 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.005552 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.064818 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 21 15:32:04 crc kubenswrapper[4890]: I0121 15:32:04.857386 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:22:25.066135578 +0000 UTC
Jan 21 15:32:05 crc kubenswrapper[4890]: I0121 15:32:05.006236 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:05 crc kubenswrapper[4890]: I0121 15:32:05.006312 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:05 crc kubenswrapper[4890]: I0121 15:32:05.007393 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:05 crc kubenswrapper[4890]: I0121 15:32:05.007470 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:05 crc kubenswrapper[4890]: I0121 15:32:05.007509 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:05 crc kubenswrapper[4890]: I0121 15:32:05.007574 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:05 crc kubenswrapper[4890]: I0121 15:32:05.007596 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:05 crc kubenswrapper[4890]: I0121 15:32:05.007611 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:05 crc kubenswrapper[4890]: I0121 15:32:05.858450 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:11:33.669272174 +0000 UTC
Jan 21 15:32:06 crc kubenswrapper[4890]: I0121 15:32:06.859659 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:22:37.476337265 +0000 UTC
Jan 21 15:32:07 crc kubenswrapper[4890]: I0121 15:32:07.861220 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:40:36.842682018 +0000 UTC
Jan 21 15:32:08 crc kubenswrapper[4890]: E0121 15:32:08.008868 4890 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 15:32:08 crc kubenswrapper[4890]: I0121 15:32:08.671335 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:32:08 crc kubenswrapper[4890]: I0121 15:32:08.671535 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:08 crc kubenswrapper[4890]: I0121 15:32:08.672999 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:08 crc kubenswrapper[4890]: I0121 15:32:08.673040 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:08 crc kubenswrapper[4890]: I0121 15:32:08.673052 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:08 crc kubenswrapper[4890]: I0121 15:32:08.677257 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:32:08 crc kubenswrapper[4890]: I0121 15:32:08.862005 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:43:16.012676777 +0000 UTC
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.016755 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.017731 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.017787 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.017812 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.249072 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.249384 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.250854 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.250907 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.250922 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:09 crc kubenswrapper[4890]: E0121 15:32:09.798518 4890 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.852155 4890 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 21 15:32:09 crc kubenswrapper[4890]: I0121 15:32:09.862934 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:13:07.193382059 +0000 UTC
Jan 21 15:32:10 crc kubenswrapper[4890]: I0121 15:32:10.535947 4890 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" start-of-body=
Jan 21 15:32:10 crc kubenswrapper[4890]: I0121 15:32:10.536011 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded"
Jan 21 15:32:10 crc kubenswrapper[4890]: W0121 15:32:10.702499 4890 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 21 15:32:10 crc kubenswrapper[4890]: I0121 15:32:10.702592 4890 trace.go:236] Trace[402451066]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:32:00.701) (total time: 10001ms):
Jan 21 15:32:10 crc kubenswrapper[4890]: Trace[402451066]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:32:10.702)
Jan 21 15:32:10 crc kubenswrapper[4890]: Trace[402451066]: [10.001324316s] [10.001324316s] END
Jan 21 15:32:10 crc kubenswrapper[4890]: E0121 15:32:10.702628 4890 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 15:32:10 crc kubenswrapper[4890]: E0121 15:32:10.862501 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Jan 21 15:32:10 crc kubenswrapper[4890]: I0121 15:32:10.863759 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:32:43.626684577 +0000 UTC
Jan 21 15:32:11 crc kubenswrapper[4890]: E0121 15:32:11.111152 4890 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 21 15:32:11 crc kubenswrapper[4890]: I0121 15:32:11.245289 4890 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 15:32:11 crc kubenswrapper[4890]: I0121 15:32:11.245341 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 15:32:11 crc kubenswrapper[4890]: I0121 15:32:11.672140 4890 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 15:32:11 crc kubenswrapper[4890]: I0121 15:32:11.672235 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:32:11 crc kubenswrapper[4890]: I0121 15:32:11.864202 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:28:05.553531285 +0000 UTC
Jan 21 15:32:12 crc kubenswrapper[4890]: I0121 15:32:12.864321 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:17:51.703805618 +0000 UTC
Jan 21 15:32:13 crc kubenswrapper[4890]: I0121 15:32:13.865075 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:22:30.964125093 +0000 UTC
Jan 21 15:32:13 crc kubenswrapper[4890]: I0121 15:32:13.976845 4890 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 15:32:13 crc kubenswrapper[4890]: I0121 15:32:13.994610 4890 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 15:32:14 crc kubenswrapper[4890]: I0121 15:32:14.020188 4890 csr.go:261] certificate signing request csr-vsdsv is approved, waiting to be issued
Jan 21 15:32:14 crc kubenswrapper[4890]: I0121 15:32:14.028021 4890 csr.go:257] certificate signing request csr-vsdsv is issued
Jan 21 15:32:14 crc kubenswrapper[4890]: I0121 15:32:14.311996 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:14 crc kubenswrapper[4890]: I0121 15:32:14.313485 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:14 crc kubenswrapper[4890]: I0121 15:32:14.313527 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:14 crc kubenswrapper[4890]: I0121 15:32:14.313540 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:14 crc kubenswrapper[4890]: I0121 15:32:14.313568 4890 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 15:32:14 crc kubenswrapper[4890]: E0121 15:32:14.323406 4890 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 21 15:32:14 crc kubenswrapper[4890]: I0121 15:32:14.865214 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:46:40.011997325 +0000 UTC
Jan 21 15:32:15 crc kubenswrapper[4890]: I0121 15:32:15.030198 4890 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 15:27:14 +0000 UTC, rotation deadline is 2026-11-14 03:48:56.770218205 +0000 UTC
Jan 21 15:32:15 crc kubenswrapper[4890]: I0121 15:32:15.030246 4890 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7116h16m41.739975355s for next certificate rotation
Jan 21 15:32:15 crc kubenswrapper[4890]: I0121 15:32:15.540959 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:32:15 crc kubenswrapper[4890]: I0121 15:32:15.541174 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:15 crc kubenswrapper[4890]: I0121 15:32:15.542411 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:15 crc kubenswrapper[4890]: I0121 15:32:15.542449 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:15 crc kubenswrapper[4890]: I0121 15:32:15.542460 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:15 crc kubenswrapper[4890]: I0121 15:32:15.547740 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:32:15 crc kubenswrapper[4890]: I0121 15:32:15.865707 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:54:15.123065618 +0000 UTC
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.034000 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.034067 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.035299 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.035462 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.035548 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.249420 4890 trace.go:236] Trace[1487417644]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:32:01.877) (total time: 14371ms):
Jan 21 15:32:16 crc kubenswrapper[4890]: Trace[1487417644]: ---"Objects listed" error: 14371ms (15:32:16.249)
Jan 21 15:32:16 crc kubenswrapper[4890]: Trace[1487417644]: [14.371650419s] [14.371650419s] END
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.249461 4890 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.249545 4890 trace.go:236] Trace[1110793393]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:32:01.296) (total time: 14952ms):
Jan 21 15:32:16 crc kubenswrapper[4890]: Trace[1110793393]: ---"Objects listed" error: 14952ms (15:32:16.249)
Jan 21 15:32:16 crc kubenswrapper[4890]: Trace[1110793393]: [14.952928982s] [14.952928982s] END
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.249560 4890 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.249567 4890 trace.go:236] Trace[1503410310]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:32:01.494) (total time: 14754ms):
Jan 21 15:32:16 crc kubenswrapper[4890]: Trace[1503410310]: ---"Objects listed" error: 14754ms (15:32:16.249)
Jan 21 15:32:16 crc kubenswrapper[4890]: Trace[1503410310]: [14.754971916s] [14.754971916s] END
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.249584 4890 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.250319 4890 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.286613 4890 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54512->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.286614 4890 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54520->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.286759 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54520->192.168.126.11:17697: read: connection reset by peer"
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.286670 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54512->192.168.126.11:17697: read: connection reset by peer"
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.287216 4890 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.287297 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.687388 4890 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.850863 4890 apiserver.go:52] "Watching apiserver" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.854213 4890 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.854553 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-vrb68","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.854867 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.854925 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.855102 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.855194 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.855229 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.855487 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.855719 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.855937 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.855974 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.856097 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vrb68" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.857754 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.858474 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.858509 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.858580 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.858673 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.858732 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.859213 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.860041 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.860058 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 
15:32:16.860190 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.861179 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.861372 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.866002 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:33:18.815255464 +0000 UTC Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.876226 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.890594 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.905626 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.918732 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qnlzh"] Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.919309 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-pflt5"] Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.919555 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.919706 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-pflt5" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.921307 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-msckx"] Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.922163 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.922660 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.924733 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.925040 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.925629 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.925675 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.925741 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.926202 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.926932 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.927013 4890 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.927042 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.927054 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.927210 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.927394 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.941793 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954328 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954424 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954459 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954489 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954518 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954545 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954610 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954635 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954660 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954701 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954727 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954752 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954776 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.954800 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.955251 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.955302 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.955612 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.955714 4890 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 
15:32:16.955806 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:17.45577892 +0000 UTC m=+19.817221329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.955824 4890 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.955903 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:17.455878702 +0000 UTC m=+19.817321331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.956287 4890 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.956566 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.956699 4890 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.964002 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.964470 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.968070 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.968105 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:16 crc 
kubenswrapper[4890]: E0121 15:32:16.968124 4890 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.968207 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:17.46818461 +0000 UTC m=+19.829627169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.972533 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.972581 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.972601 4890 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 21 15:32:16 crc kubenswrapper[4890]: E0121 15:32:16.972685 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:17.472659932 +0000 UTC m=+19.834102461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.973591 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.978627 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.979671 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.980088 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.989421 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:16 crc kubenswrapper[4890]: I0121 15:32:16.998522 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.009214 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.019226 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.030588 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.038479 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.040236 4890 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5" exitCode=255 Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.040285 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5"} Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.046303 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.055274 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.055317 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.055338 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.055727 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.055761 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.055396 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056194 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056236 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056261 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056200 4890 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056281 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056302 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056318 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056255 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056231 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056420 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056333 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056509 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056523 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056551 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056583 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056611 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056640 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056667 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056692 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056715 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056739 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056764 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056790 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056814 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:32:17 crc kubenswrapper[4890]: 
I0121 15:32:17.056838 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056861 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056884 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056909 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056931 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056974 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056998 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057021 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057044 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057102 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057125 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057146 4890 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057169 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057190 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057213 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056785 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056793 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057251 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056833 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057237 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057331 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057381 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057407 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057431 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057457 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057484 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057507 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057541 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057567 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057599 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057621 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057644 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057666 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057687 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057709 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057769 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057793 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057813 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057833 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057853 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057876 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057896 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057916 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057938 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057961 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057986 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 
15:32:17.058008 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058030 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058054 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058076 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058098 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058119 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058140 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058162 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058186 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058207 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058227 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:32:17 crc 
kubenswrapper[4890]: I0121 15:32:17.058247 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058301 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058319 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058376 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058399 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058417 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058436 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058457 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058516 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058538 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058560 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058581 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058613 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058634 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058657 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058679 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058700 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058720 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058741 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058820 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058848 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058870 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058891 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058911 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058931 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058952 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058974 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058993 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059012 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059039 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059064 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059086 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059107 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" 
(UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059129 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059153 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059174 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059194 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059224 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059244 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059264 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059285 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059304 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059325 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059361 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059386 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059409 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059430 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059482 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059505 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059526 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059550 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059573 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056972 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.056978 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057026 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057045 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057051 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057146 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057219 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057224 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057230 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057463 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057561 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.057838 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058212 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058237 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058408 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058452 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058511 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058657 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058681 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058730 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058738 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058875 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.058919 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059804 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059003 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059827 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059555 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059579 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059595 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059865 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059884 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059881 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059905 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059925 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059953 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:32:17 crc kubenswrapper[4890]: 
I0121 15:32:17.059972 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.059988 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060006 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060024 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060064 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060078 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" 
(OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060083 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060128 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060147 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060166 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060171 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: 
"01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060190 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060185 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060244 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060268 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060273 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060305 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060322 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060337 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060331 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060583 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060602 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060653 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060896 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060918 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060933 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060968 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.060986 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061001 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061018 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061033 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061048 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061065 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061081 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " 
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061097 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061111 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061127 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061142 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061160 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061177 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061193 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061208 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061224 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061239 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061254 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:32:17 crc kubenswrapper[4890]: 
I0121 15:32:17.061271 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061285 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061300 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061316 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061331 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061361 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061386 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061406 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061423 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061439 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061798 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061823 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061852 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061875 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061898 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061920 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061947 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 
15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061971 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.061993 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062016 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062037 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062050 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062064 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062088 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062114 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062136 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062158 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062201 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062229 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-cni-dir\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062266 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062287 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-var-lib-cni-bin\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062310 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67047065-8bad-4e4d-8b91-47e7ee72ffb6-mcd-auth-proxy-config\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062371 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-system-cni-dir\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062394 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-os-release\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062437 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-system-cni-dir\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062459 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-conf-dir\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062479 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/67047065-8bad-4e4d-8b91-47e7ee72ffb6-rootfs\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062502 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-run-netns\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062524 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-var-lib-cni-multus\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062547 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9260bc10-0bda-4046-9b76-78b103f176be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062568 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-run-k8s-cni-cncf-io\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062589 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-etc-kubernetes\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062609 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-os-release\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062648 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8pq4\" (UniqueName: \"kubernetes.io/projected/67047065-8bad-4e4d-8b91-47e7ee72ffb6-kube-api-access-q8pq4\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062682 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-socket-dir-parent\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062704 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-run-multus-certs\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062748 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmgn\" (UniqueName: \"kubernetes.io/projected/9260bc10-0bda-4046-9b76-78b103f176be-kube-api-access-nvmgn\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: 
I0121 15:32:17.062795 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b8a9b5f1-5b7a-48b3-b941-8255b14d809f-hosts-file\") pod \"node-resolver-vrb68\" (UID: \"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\") " pod="openshift-dns/node-resolver-vrb68" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062816 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eba30f20-e5ad-4888-850d-1715115ab8bd-cni-binary-copy\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062837 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c85zh\" (UniqueName: \"kubernetes.io/projected/b8a9b5f1-5b7a-48b3-b941-8255b14d809f-kube-api-access-c85zh\") pod \"node-resolver-vrb68\" (UID: \"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\") " pod="openshift-dns/node-resolver-vrb68" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062857 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-var-lib-kubelet\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062880 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc 
kubenswrapper[4890]: I0121 15:32:17.062902 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67047065-8bad-4e4d-8b91-47e7ee72ffb6-proxy-tls\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062923 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-cnibin\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062944 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58ncx\" (UniqueName: \"kubernetes.io/projected/eba30f20-e5ad-4888-850d-1715115ab8bd-kube-api-access-58ncx\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062965 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-cnibin\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.062984 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9260bc10-0bda-4046-9b76-78b103f176be-cni-binary-copy\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " 
pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.063004 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-hostroot\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.063022 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-daemon-config\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.063215 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.063451 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.063484 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.063512 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.063759 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.063809 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064026 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064030 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064341 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064412 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064419 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064476 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064608 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064791 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064843 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064901 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065080 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065084 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065098 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065114 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065182 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065572 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065596 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065621 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065622 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065649 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065653 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.065844 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.064747 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066067 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066137 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066289 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066305 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066312 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066333 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066395 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066411 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066650 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066654 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066773 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.067203 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.067795 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.067954 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.068014 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.068149 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.068369 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.068411 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.068698 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.068847 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.068855 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.068908 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.068935 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066781 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.066891 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.069516 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070054 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070195 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070255 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070445 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070529 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070709 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070716 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070813 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070831 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.070864 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.071006 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.071064 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.071250 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.071477 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.071490 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.071791 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072004 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072026 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072042 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072118 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072253 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072641 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072692 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072724 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072896 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.072922 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073227 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073236 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073253 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073255 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073395 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073455 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073612 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073708 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073765 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073868 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073909 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073920 4890 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.073981 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074005 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074050 4890 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074071 4890 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074088 4890 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074078 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). 
InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074105 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074300 4890 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074304 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074328 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074342 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074467 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074553 4890 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074575 4890 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074593 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074609 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074622 4890 
reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074635 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074649 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074662 4890 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074673 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074685 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074697 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074712 4890 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074724 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074735 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074748 4890 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074760 4890 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074772 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074786 4890 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074798 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node 
\"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074809 4890 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074821 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074833 4890 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074845 4890 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074947 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074969 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074983 4890 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.074995 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075026 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075038 4890 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075033 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075058 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075070 4890 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075081 4890 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075094 4890 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075094 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075106 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.075190 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:32:17.575171266 +0000 UTC m=+19.936613675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075229 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075253 4890 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075269 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075274 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075373 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075417 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075440 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075487 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.075678 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.076140 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.076496 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.076518 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.077098 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.077198 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.077501 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.077888 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.077945 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.078024 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.078087 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.078385 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.078863 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.078922 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.079301 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.079391 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.079417 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.079439 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.079676 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.079782 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.079891 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.079921 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.080230 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.080365 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.080538 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.080587 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.080652 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.080657 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.080845 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.081134 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.081199 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.081188 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.081292 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.080694 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.082171 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.082236 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.082274 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.082500 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.082739 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.082829 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.082865 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.082935 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.082957 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.083013 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.083260 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.088331 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.098543 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.099061 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.099314 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.101183 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.105948 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.109162 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.116155 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.136028 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.147687 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.147935 4890 scope.go:117] "RemoveContainer" containerID="15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.148165 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.177310 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.177826 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.178027 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.184095 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-run-k8s-cni-cncf-io\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.184276 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-var-lib-cni-multus\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.184450 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-run-k8s-cni-cncf-io\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.184444 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-var-lib-cni-multus\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.184641 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9260bc10-0bda-4046-9b76-78b103f176be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.184736 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-etc-kubernetes\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.184825 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-os-release\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.184929 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8pq4\" (UniqueName: 
\"kubernetes.io/projected/67047065-8bad-4e4d-8b91-47e7ee72ffb6-kube-api-access-q8pq4\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.185903 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9260bc10-0bda-4046-9b76-78b103f176be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186036 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-etc-kubernetes\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186365 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-socket-dir-parent\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186645 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-run-multus-certs\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186319 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-os-release\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186745 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmgn\" (UniqueName: \"kubernetes.io/projected/9260bc10-0bda-4046-9b76-78b103f176be-kube-api-access-nvmgn\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186847 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b8a9b5f1-5b7a-48b3-b941-8255b14d809f-hosts-file\") pod \"node-resolver-vrb68\" (UID: \"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\") " pod="openshift-dns/node-resolver-vrb68"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186875 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eba30f20-e5ad-4888-850d-1715115ab8bd-cni-binary-copy\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186898 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-var-lib-kubelet\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186917 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186934 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c85zh\" (UniqueName: \"kubernetes.io/projected/b8a9b5f1-5b7a-48b3-b941-8255b14d809f-kube-api-access-c85zh\") pod \"node-resolver-vrb68\" (UID: \"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\") " pod="openshift-dns/node-resolver-vrb68"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186956 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-cnibin\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186973 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58ncx\" (UniqueName: \"kubernetes.io/projected/eba30f20-e5ad-4888-850d-1715115ab8bd-kube-api-access-58ncx\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.186993 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67047065-8bad-4e4d-8b91-47e7ee72ffb6-proxy-tls\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187010 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-cnibin\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187030 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9260bc10-0bda-4046-9b76-78b103f176be-cni-binary-copy\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187058 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-daemon-config\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187081 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-hostroot\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187098 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-cni-dir\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187126 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-var-lib-cni-bin\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187144 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-system-cni-dir\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187160 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-os-release\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187236 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-os-release\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187320 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-socket-dir-parent\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187609 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-run-multus-certs\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187768 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67047065-8bad-4e4d-8b91-47e7ee72ffb6-mcd-auth-proxy-config\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.187873 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eba30f20-e5ad-4888-850d-1715115ab8bd-cni-binary-copy\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188011 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-system-cni-dir\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188106 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-run-netns\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188202 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-conf-dir\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188340 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-run-netns\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188853 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-cni-dir\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188418 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/67047065-8bad-4e4d-8b91-47e7ee72ffb6-rootfs\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188402 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9260bc10-0bda-4046-9b76-78b103f176be-cni-binary-copy\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188468 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-conf-dir\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188802 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eba30f20-e5ad-4888-850d-1715115ab8bd-multus-daemon-config\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188827 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b8a9b5f1-5b7a-48b3-b941-8255b14d809f-hosts-file\") pod \"node-resolver-vrb68\" (UID: \"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\") " pod="openshift-dns/node-resolver-vrb68"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.188453 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/67047065-8bad-4e4d-8b91-47e7ee72ffb6-rootfs\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.189029 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-system-cni-dir\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.189087 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-hostroot\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.189201 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-var-lib-cni-bin\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.189283 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67047065-8bad-4e4d-8b91-47e7ee72ffb6-mcd-auth-proxy-config\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.189396 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-host-var-lib-kubelet\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.189678 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.189718 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-system-cni-dir\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.189752 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9260bc10-0bda-4046-9b76-78b103f176be-cnibin\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190031 4890 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eba30f20-e5ad-4888-850d-1715115ab8bd-cnibin\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190126 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190163 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190339 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190493 4890 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190523 4890 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190538 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190548 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190595 4890 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190605 4890 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190617 4890 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190627 4890 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190636 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190702 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190716 4890 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190725 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190735 4890 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190788 4890 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190798 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190809 4890 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190819 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190895 4890 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190904 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190914 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190922 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190934 4890 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190942 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190951 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190963 4890 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190973 4890 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190982 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.190991 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191002 4890 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191010 4890 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191019 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191028 4890 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191043 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191051 4890 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191060 4890 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191068 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191079 4890 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191087 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191095 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191105 4890 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191114 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191123 4890 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191131 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191142 4890 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191151 4890 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191159 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191168 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191178 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191186 4890 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191195 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191203 4890 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191213 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191221 4890 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191230 4890 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node 
\"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191241 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191250 4890 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191259 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191268 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191278 4890 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191287 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191295 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191304 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.191314 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.196283 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.196333 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203145 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203175 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203190 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203206 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203219 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203230 4890 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203241 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203256 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203267 4890 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203277 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203288 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203299 4890 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203310 4890 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203321 4890 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203332 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203343 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203375 4890 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203386 4890 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203398 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203409 4890 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203421 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203432 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203443 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203454 4890 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203466 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203478 4890 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203488 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203500 4890 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203512 4890 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203524 4890 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203535 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203547 4890 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203559 4890 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on 
node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203574 4890 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203587 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203599 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203611 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203624 4890 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203636 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203648 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203660 4890 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203672 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203685 4890 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203697 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203709 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203721 4890 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203735 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203748 4890 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203760 4890 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203772 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203784 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203797 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203809 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203822 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203833 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 
15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203845 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203859 4890 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203870 4890 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203883 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203895 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203908 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203922 4890 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203933 4890 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203946 4890 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203957 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203969 4890 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203980 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.203991 4890 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.204003 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.204016 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.204028 4890 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.204043 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.204055 4890 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.204069 4890 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.204082 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.204095 4890 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.204107 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.212740 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67047065-8bad-4e4d-8b91-47e7ee72ffb6-proxy-tls\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.218552 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.224636 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmgn\" (UniqueName: \"kubernetes.io/projected/9260bc10-0bda-4046-9b76-78b103f176be-kube-api-access-nvmgn\") pod \"multus-additional-cni-plugins-msckx\" (UID: \"9260bc10-0bda-4046-9b76-78b103f176be\") " pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.224813 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c85zh\" (UniqueName: \"kubernetes.io/projected/b8a9b5f1-5b7a-48b3-b941-8255b14d809f-kube-api-access-c85zh\") pod \"node-resolver-vrb68\" (UID: \"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\") " pod="openshift-dns/node-resolver-vrb68" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.226314 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8pq4\" (UniqueName: \"kubernetes.io/projected/67047065-8bad-4e4d-8b91-47e7ee72ffb6-kube-api-access-q8pq4\") pod \"machine-config-daemon-qnlzh\" (UID: \"67047065-8bad-4e4d-8b91-47e7ee72ffb6\") " pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.226321 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-58ncx\" (UniqueName: \"kubernetes.io/projected/eba30f20-e5ad-4888-850d-1715115ab8bd-kube-api-access-58ncx\") pod \"multus-pflt5\" (UID: \"eba30f20-e5ad-4888-850d-1715115ab8bd\") " pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.236680 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.247856 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-pflt5" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.249511 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.256418 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-msckx" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.287402 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.305919 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rp8lm"] Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.306635 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.315659 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.315726 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.315835 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.315904 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.316638 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.316861 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.317258 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.323640 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.350498 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.371312 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.380541 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.389412 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.397236 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.406822 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407624 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-systemd-units\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407669 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovn-node-metrics-cert\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407732 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-ovn\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407758 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-netd\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407798 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-ovn-kubernetes\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407820 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-openvswitch\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407842 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-slash\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407870 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-node-log\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407891 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-var-lib-openvswitch\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407920 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-etc-openvswitch\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407939 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-env-overrides\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407966 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7pqd\" (UniqueName: \"kubernetes.io/projected/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-kube-api-access-v7pqd\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.407996 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-systemd\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.408017 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-config\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.408054 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-bin\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.408078 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-script-lib\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.408157 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-kubelet\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.408189 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-netns\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.408215 4890 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.408241 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-log-socket\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.417161 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.427015 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.438246 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.454786 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.463947 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.473456 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.484382 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.492824 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vrb68" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.508687 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-etc-openvswitch\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.508726 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-env-overrides\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.508754 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-systemd\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.508764 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-etc-openvswitch\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.508775 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-config\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc 
kubenswrapper[4890]: I0121 15:32:17.508907 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-systemd\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.508972 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7pqd\" (UniqueName: \"kubernetes.io/projected/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-kube-api-access-v7pqd\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509020 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509047 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-bin\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509062 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-script-lib\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 
crc kubenswrapper[4890]: I0121 15:32:17.509086 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-kubelet\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509103 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509142 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509159 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-netns\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509177 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509196 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-log-socket\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509219 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-systemd-units\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509234 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovn-node-metrics-cert\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509252 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509269 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-ovn\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" 
Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509284 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-netd\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509302 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-ovn-kubernetes\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509317 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-openvswitch\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509322 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-netns\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509333 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-slash\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509363 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-node-log\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509384 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-var-lib-openvswitch\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509403 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-env-overrides\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509438 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-var-lib-openvswitch\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.509445 4890 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509480 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509494 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-kubelet\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509505 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-log-socket\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509525 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-bin\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509547 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-systemd-units\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509564 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-config\") pod \"ovnkube-node-rp8lm\" 
(UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509730 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-ovn-kubernetes\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509768 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-slash\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509788 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-ovn\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509813 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-openvswitch\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509822 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-netd\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc 
kubenswrapper[4890]: E0121 15:32:17.509498 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:18.509480663 +0000 UTC m=+20.870923062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509841 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-node-log\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.509703 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.509877 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.509890 4890 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:17 crc kubenswrapper[4890]: 
E0121 15:32:17.509933 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:18.509917404 +0000 UTC m=+20.871359903 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.509755 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.509953 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-script-lib\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.509959 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.509991 4890 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 
15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.510038 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:18.510023356 +0000 UTC m=+20.871465765 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.509794 4890 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.510093 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:18.510078498 +0000 UTC m=+20.871520987 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.512852 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovn-node-metrics-cert\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.530760 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7pqd\" (UniqueName: \"kubernetes.io/projected/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-kube-api-access-v7pqd\") pod \"ovnkube-node-rp8lm\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.610868 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:17 crc kubenswrapper[4890]: E0121 15:32:17.611125 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:32:18.611105606 +0000 UTC m=+20.972548015 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.645765 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.659612 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d5dcae_8e63_4910_9a28_4f6a5b2d427f.slice/crio-9ab9e65ad4d916ba05cd66018ce0fddbce8fa12078227840f8ccdc6ea8a3ede8 WatchSource:0}: Error finding container 9ab9e65ad4d916ba05cd66018ce0fddbce8fa12078227840f8ccdc6ea8a3ede8: Status 404 returned error can't find the container with id 9ab9e65ad4d916ba05cd66018ce0fddbce8fa12078227840f8ccdc6ea8a3ede8 Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.773042 4890 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773296 4890 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773368 4890 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no 
items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773367 4890 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773398 4890 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773413 4890 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773433 4890 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773455 4890 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773479 4890 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 
crc kubenswrapper[4890]: W0121 15:32:17.773499 4890 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773433 4890 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773528 4890 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773547 4890 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773568 4890 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773573 4890 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773588 4890 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773375 4890 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773604 4890 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773628 4890 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773661 4890 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773696 4890 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773741 4890 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773769 4890 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773785 4890 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773789 4890 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773806 4890 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773454 4890 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: 
very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773797 4890 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773826 4890 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773839 4890 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773823 4890 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: W0121 15:32:17.773848 4890 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.866923 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:24:14.735805065 +0000 UTC Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.917237 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.917982 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.919091 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.919720 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.920684 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.921192 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.921806 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.922713 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.923344 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.924964 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.925560 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.926605 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.927065 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.927640 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.928530 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.929045 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.930004 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.930670 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.933754 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.934390 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.934828 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.935966 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.936713 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.937711 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.938155 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.940938 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.941646 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.942773 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.943772 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.944256 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.945348 4890 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.945480 4890 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.949233 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.950484 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.951521 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.952596 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.954315 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.955346 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.955926 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.958503 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.959748 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.960311 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.960963 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.962227 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.963416 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.963917 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.964889 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.965434 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.966514 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.967037 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.967893 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.968340 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.968849 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.969923 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.970401 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.973660 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:17 crc kubenswrapper[4890]: I0121 15:32:17.984906 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.005456 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.025546 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.041203 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.047024 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vrb68" event={"ID":"b8a9b5f1-5b7a-48b3-b941-8255b14d809f","Type":"ContainerStarted","Data":"319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.047076 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vrb68" event={"ID":"b8a9b5f1-5b7a-48b3-b941-8255b14d809f","Type":"ContainerStarted","Data":"635042f415d4b059120068281c6d156d080a0da40edbbb862c6367154a3e918a"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.049107 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d" 
exitCode=0 Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.049198 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.049236 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"9ab9e65ad4d916ba05cd66018ce0fddbce8fa12078227840f8ccdc6ea8a3ede8"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.054871 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.055384 4890 generic.go:334] "Generic (PLEG): container finished" podID="9260bc10-0bda-4046-9b76-78b103f176be" containerID="793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444" exitCode=0 Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.055458 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" event={"ID":"9260bc10-0bda-4046-9b76-78b103f176be","Type":"ContainerDied","Data":"793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.055508 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" event={"ID":"9260bc10-0bda-4046-9b76-78b103f176be","Type":"ContainerStarted","Data":"d011942bd74eb7559903d2ab7eeaa2911e6373996e0428ce248bfaf54f0759e1"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.058699 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3bf5a080019fead2848ba5fae4a62e80ab89b0bc3d221fea59c72ceae32b0727"} Jan 21 15:32:18 crc 
kubenswrapper[4890]: I0121 15:32:18.060384 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.060449 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6d1c81303a18a2c6fefb2698b0f9ed805f831c67a0441870058fd8a54f927901"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.064990 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.065528 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.065547 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"67b34f0a5e51aa89e646de788b1af8c7adf04723b99ae054a58ea707fe171dc0"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.067763 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.068269 4890 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.069050 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.069660 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.070905 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pflt5" event={"ID":"eba30f20-e5ad-4888-850d-1715115ab8bd","Type":"ContainerStarted","Data":"e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7"} Jan 21 15:32:18 crc 
kubenswrapper[4890]: I0121 15:32:18.070938 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pflt5" event={"ID":"eba30f20-e5ad-4888-850d-1715115ab8bd","Type":"ContainerStarted","Data":"1e1056de812e11f34b92586413b6f982db8537d7edba3c6cfee6d45de7b12106"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.072470 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.072501 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.072511 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"43f2dc279dcca3f7c6dfce4178cc454b9f32be50b2a00ef7e7ba8355f506203b"} Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.091486 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.104857 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.120267 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.133901 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.157051 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.173084 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.192246 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.207281 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.222113 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.238676 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.254069 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.277196 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.290748 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.306006 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.323846 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.337717 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.391709 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-twcft"] Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.392413 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.394307 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.396580 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.396643 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.396754 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.406732 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.445396 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.488000 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.526827 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fcc746ac-6844-4a76-a68d-ff79281e1561-serviceca\") pod \"node-ca-twcft\" (UID: \"fcc746ac-6844-4a76-a68d-ff79281e1561\") " pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.526873 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fcc746ac-6844-4a76-a68d-ff79281e1561-host\") pod \"node-ca-twcft\" (UID: \"fcc746ac-6844-4a76-a68d-ff79281e1561\") " pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.526908 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.526947 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.527005 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.527043 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjtx\" (UniqueName: \"kubernetes.io/projected/fcc746ac-6844-4a76-a68d-ff79281e1561-kube-api-access-6pjtx\") pod \"node-ca-twcft\" (UID: \"fcc746ac-6844-4a76-a68d-ff79281e1561\") " pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527060 4890 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.527085 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527126 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:20.527108905 +0000 UTC m=+22.888551314 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527125 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527164 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527185 4890 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527243 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527257 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:20.527233108 +0000 UTC m=+22.888675537 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527264 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527168 4890 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527323 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:20.52730815 +0000 UTC m=+22.888750579 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527283 4890 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.527398 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:20.527385952 +0000 UTC m=+22.888828451 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.527847 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-mu
ltus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"st
artTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.565485 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.606977 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.627640 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.627731 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fcc746ac-6844-4a76-a68d-ff79281e1561-serviceca\") pod \"node-ca-twcft\" (UID: \"fcc746ac-6844-4a76-a68d-ff79281e1561\") " pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.627756 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/fcc746ac-6844-4a76-a68d-ff79281e1561-host\") pod \"node-ca-twcft\" (UID: \"fcc746ac-6844-4a76-a68d-ff79281e1561\") " pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.627809 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:32:20.627783554 +0000 UTC m=+22.989225953 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.627886 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pjtx\" (UniqueName: \"kubernetes.io/projected/fcc746ac-6844-4a76-a68d-ff79281e1561-kube-api-access-6pjtx\") pod \"node-ca-twcft\" (UID: \"fcc746ac-6844-4a76-a68d-ff79281e1561\") " pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.627819 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fcc746ac-6844-4a76-a68d-ff79281e1561-host\") pod \"node-ca-twcft\" (UID: \"fcc746ac-6844-4a76-a68d-ff79281e1561\") " pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.628862 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/fcc746ac-6844-4a76-a68d-ff79281e1561-serviceca\") pod \"node-ca-twcft\" (UID: \"fcc746ac-6844-4a76-a68d-ff79281e1561\") " pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.665136 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.673292 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.680225 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pjtx\" (UniqueName: \"kubernetes.io/projected/fcc746ac-6844-4a76-a68d-ff79281e1561-kube-api-access-6pjtx\") pod \"node-ca-twcft\" (UID: \"fcc746ac-6844-4a76-a68d-ff79281e1561\") " pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.684691 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.697201 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.722970 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.743937 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.753722 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.767734 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-twcft" Jan 21 15:32:18 crc kubenswrapper[4890]: W0121 15:32:18.779979 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcc746ac_6844_4a76_a68d_ff79281e1561.slice/crio-b3d33d6e5bfbeb027dfb3c3ebfe191e1e744bdd684b7ea364057fb34a4105986 WatchSource:0}: Error finding container b3d33d6e5bfbeb027dfb3c3ebfe191e1e744bdd684b7ea364057fb34a4105986: Status 404 returned error can't find the container with id b3d33d6e5bfbeb027dfb3c3ebfe191e1e744bdd684b7ea364057fb34a4105986 Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.792426 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.813528 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.832957 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.867528 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.867676 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:41:45.460602336 +0000 UTC Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.873444 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.892528 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.913196 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.913234 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.913291 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.913394 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.913196 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:18 crc kubenswrapper[4890]: E0121 15:32:18.913471 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.913968 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.932771 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.973483 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:32:18 crc kubenswrapper[4890]: I0121 15:32:18.993093 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.012709 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.033481 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.062143 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.072779 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.078614 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f"} Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.078656 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a"} Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.078671 4890 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c"} Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.078682 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284"} Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.078693 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64"} Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.078703 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2"} Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.079940 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-twcft" event={"ID":"fcc746ac-6844-4a76-a68d-ff79281e1561","Type":"ContainerStarted","Data":"c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c"} Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.079969 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-twcft" event={"ID":"fcc746ac-6844-4a76-a68d-ff79281e1561","Type":"ContainerStarted","Data":"b3d33d6e5bfbeb027dfb3c3ebfe191e1e744bdd684b7ea364057fb34a4105986"} Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.086329 4890 generic.go:334] "Generic (PLEG): container finished" podID="9260bc10-0bda-4046-9b76-78b103f176be" 
containerID="fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc" exitCode=0 Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.086394 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" event={"ID":"9260bc10-0bda-4046-9b76-78b103f176be","Type":"ContainerDied","Data":"fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc"} Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.093193 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.112915 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.134166 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.153294 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.173079 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.213298 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.233499 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 15:32:19 crc kubenswrapper[4890]: E0121 15:32:19.260150 4890 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.272633 4890 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.276900 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.289000 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.293289 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.313406 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.345074 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.353821 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.372845 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.393661 4890 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.429728 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.434095 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.453922 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.473915 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.513665 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.543790 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.581185 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.622269 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.664816 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.703269 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.741834 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.780208 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.830231 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.868375 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:10:58.607456385 +0000 UTC Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.869336 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.903073 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.943474 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:19 crc kubenswrapper[4890]: I0121 15:32:19.980799 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.023953 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.061510 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.093202 4890 generic.go:334] "Generic (PLEG): container finished" podID="9260bc10-0bda-4046-9b76-78b103f176be" containerID="a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27" exitCode=0 Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.093291 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" event={"ID":"9260bc10-0bda-4046-9b76-78b103f176be","Type":"ContainerDied","Data":"a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27"} Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.102734 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.122852 4890 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.164733 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.202041 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.244267 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.285428 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.322538 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.360384 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.403204 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.445540 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.482969 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.521861 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.544667 4890 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.544764 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:24.544742579 +0000 UTC m=+26.906184998 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.544573 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.544857 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.544882 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.544971 4890 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.545010 4890 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.545044 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:24.545035887 +0000 UTC m=+26.906478296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.545113 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.545157 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.545167 4890 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.545175 4890 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.545206 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:24.54518363 +0000 UTC m=+26.906626039 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.545216 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.545235 4890 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.545306 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:24.545280033 +0000 UTC m=+26.906722512 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.561409 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/
var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.607709 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.645436 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.645639 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:32:24.645605422 +0000 UTC m=+27.007047871 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.648708 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.686666 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.724291 4890 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.726497 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.726579 4890 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.726601 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.726756 4890 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.726974 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-nod
e-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.774979 4890 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.775266 4890 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.776467 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.776498 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.776508 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.776525 4890 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.776538 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:20Z","lastTransitionTime":"2026-01-21T15:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.790227 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.794978 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.795017 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.795033 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.795051 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.795066 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:20Z","lastTransitionTime":"2026-01-21T15:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.805698 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.809870 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.813833 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.813872 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.813881 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.813896 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.813905 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:20Z","lastTransitionTime":"2026-01-21T15:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.828575 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.832287 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.832338 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.832379 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.832403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.832417 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:20Z","lastTransitionTime":"2026-01-21T15:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.845843 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.846045 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 
15:32:20.849334 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.849389 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.849405 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.849426 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.849441 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:20Z","lastTransitionTime":"2026-01-21T15:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.861185 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.861326 4890 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.862815 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.862851 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.862865 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.862883 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.862897 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:20Z","lastTransitionTime":"2026-01-21T15:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.869374 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:23:10.599302151 +0000 UTC Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.914003 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.914005 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.914065 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.914123 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.914204 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:20 crc kubenswrapper[4890]: E0121 15:32:20.914330 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.965614 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.965670 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.965683 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.965702 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:20 crc kubenswrapper[4890]: I0121 15:32:20.965714 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:20Z","lastTransitionTime":"2026-01-21T15:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.069735 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.069790 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.069805 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.069828 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.069843 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.098270 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.103523 4890 generic.go:334] "Generic (PLEG): container finished" podID="9260bc10-0bda-4046-9b76-78b103f176be" containerID="6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7" exitCode=0 Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.103600 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" event={"ID":"9260bc10-0bda-4046-9b76-78b103f176be","Type":"ContainerDied","Data":"6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.111245 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128e
b77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.130269 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 
15:32:21.142497 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.152383 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.166587 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.173029 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.173056 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.173063 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.173076 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.173085 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.179961 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.195085 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.208780 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.223100 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.241051 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.275973 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.276022 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.276034 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.276055 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.276067 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.282918 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.320970 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.364916 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.379888 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.379940 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.379956 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.379978 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.379993 4890 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.420575 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.448945 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.483474 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.483514 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.483524 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.483539 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.483548 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.484321 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a
79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.524938 4890 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.567270 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.586667 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.586717 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.586729 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:21 crc 
kubenswrapper[4890]: I0121 15:32:21.586751 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.586764 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.603659 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 
15:32:21.646262 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.687999 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015
d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.689410 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.689455 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.689470 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.689491 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.689506 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.734059 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.766951 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.791501 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.791541 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.791552 4890 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.791568 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.791579 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.800235 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90
de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.841864 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128e
b77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.869867 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:26:51.755407017 +0000 UTC Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.888609 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.894884 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.894933 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.894951 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.894978 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.894994 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.926313 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.966511 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:21Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.997770 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.997810 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.997821 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.997838 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:21 crc kubenswrapper[4890]: I0121 15:32:21.997857 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:21Z","lastTransitionTime":"2026-01-21T15:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.011430 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:
32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.043850 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.100652 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.100680 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.100688 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.100701 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.100709 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:22Z","lastTransitionTime":"2026-01-21T15:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.109402 4890 generic.go:334] "Generic (PLEG): container finished" podID="9260bc10-0bda-4046-9b76-78b103f176be" containerID="f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539" exitCode=0 Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.109455 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" event={"ID":"9260bc10-0bda-4046-9b76-78b103f176be","Type":"ContainerDied","Data":"f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.115816 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.139044 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.158076 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.171601 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.202417 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.203029 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:22 crc 
kubenswrapper[4890]: I0121 15:32:22.203054 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.203063 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.203077 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.203086 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:22Z","lastTransitionTime":"2026-01-21T15:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.244299 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.282043 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.305647 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.305675 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.305684 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.305698 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.305706 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:22Z","lastTransitionTime":"2026-01-21T15:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.360324 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.379008 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fd
ee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.401529 4890 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.408160 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.408186 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.408194 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.408207 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.408217 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:22Z","lastTransitionTime":"2026-01-21T15:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.456437 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.497769 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.510977 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.511007 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.511017 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.511032 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.511041 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:22Z","lastTransitionTime":"2026-01-21T15:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.524576 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.559673 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.599657 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.613766 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.613827 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.613843 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.613867 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.613884 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:22Z","lastTransitionTime":"2026-01-21T15:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.646483 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.717187 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.717244 4890 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.717260 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.717282 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.717299 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:22Z","lastTransitionTime":"2026-01-21T15:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.822441 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.822527 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.822547 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.822572 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.822589 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:22Z","lastTransitionTime":"2026-01-21T15:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.870065 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:15:47.543915134 +0000 UTC Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.913584 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.913637 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:22 crc kubenswrapper[4890]: E0121 15:32:22.913715 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.913740 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:22 crc kubenswrapper[4890]: E0121 15:32:22.913787 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:22 crc kubenswrapper[4890]: E0121 15:32:22.913910 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.925446 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.925486 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.925496 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.925511 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:22 crc kubenswrapper[4890]: I0121 15:32:22.925521 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:22Z","lastTransitionTime":"2026-01-21T15:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.027765 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.027826 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.027838 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.027856 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.027867 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.123728 4890 generic.go:334] "Generic (PLEG): container finished" podID="9260bc10-0bda-4046-9b76-78b103f176be" containerID="e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.123817 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" event={"ID":"9260bc10-0bda-4046-9b76-78b103f176be","Type":"ContainerDied","Data":"e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.130072 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.130108 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.130121 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.130140 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.130158 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.148338 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.165981 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.178989 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.194988 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.208672 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.226538 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.232120 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.232161 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.232174 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.232191 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.232204 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.241078 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:
32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.252802 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.265609 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.278967 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.294424 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.308105 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.321509 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.334323 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.334382 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.334397 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.334414 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.334426 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.341777 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.359934 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.437613 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.437694 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.437713 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.437742 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.437761 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.541810 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.541850 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.541873 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.541889 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.541898 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.644684 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.644711 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.644719 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.644731 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.644739 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.747770 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.748281 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.748293 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.748314 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.748327 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.850959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.851037 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.851048 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.851068 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.851100 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.870196 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:47:56.544411634 +0000 UTC Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.953466 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.953504 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.953516 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.953531 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:23 crc kubenswrapper[4890]: I0121 15:32:23.953542 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:23Z","lastTransitionTime":"2026-01-21T15:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.056728 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.056779 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.056790 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.056808 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.056820 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.129511 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" event={"ID":"9260bc10-0bda-4046-9b76-78b103f176be","Type":"ContainerStarted","Data":"9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.137522 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.137880 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.138014 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.149378 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.159372 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.159423 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.159437 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.159457 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.159471 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.160264 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.162649 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.164283 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.177468 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.192534 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.204402 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.213974 4890 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.224525 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.235158 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.246146 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.261665 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.261702 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.261713 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.261731 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.261743 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.263188 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.282077 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.292477 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.303553 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.322007 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.343506 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.358808 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.364435 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.364465 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.364476 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.364493 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.364504 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.374795 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.388950 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128e
b77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.403239 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.415905 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.426974 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.440915 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.450399 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.462310 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.466614 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.466650 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.466658 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.466675 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.466685 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.476236 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.490199 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.502194 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.513648 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.531298 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.548135 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.569514 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.569566 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.569585 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.569622 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.569642 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.581133 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.581179 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.581205 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.581238 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581324 4890 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581386 4890 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581416 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581437 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581453 4890 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581401 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:32.58138561 +0000 UTC m=+34.942828029 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581497 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581513 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:32.581488193 +0000 UTC m=+34.942930602 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581541 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581568 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:32.581558695 +0000 UTC m=+34.943001104 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581575 4890 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.581659 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:32.581632626 +0000 UTC m=+34.943075075 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.673390 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.673461 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.673483 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.673513 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.673536 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.682210 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.682446 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:32:32.682409428 +0000 UTC m=+35.043851877 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.776850 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.776930 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.776955 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.776984 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: 
I0121 15:32:24.777003 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.870884 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:25:45.079505213 +0000 UTC Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.879955 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.880027 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.880050 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.880078 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.880100 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.913747 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.913808 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.913988 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.914026 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.914128 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:24 crc kubenswrapper[4890]: E0121 15:32:24.914272 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.982112 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.982142 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.982150 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.982162 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:24 crc kubenswrapper[4890]: I0121 15:32:24.982171 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:24Z","lastTransitionTime":"2026-01-21T15:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.084533 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.084601 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.084623 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.084650 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.084673 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:25Z","lastTransitionTime":"2026-01-21T15:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.140983 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.187535 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.187572 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.187581 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.187598 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.187609 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:25Z","lastTransitionTime":"2026-01-21T15:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.290047 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.290108 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.290124 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.290150 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.290168 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:25Z","lastTransitionTime":"2026-01-21T15:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.393165 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.393210 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.393222 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.393240 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.393256 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:25Z","lastTransitionTime":"2026-01-21T15:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.496449 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.496497 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.496509 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.496525 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.496538 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:25Z","lastTransitionTime":"2026-01-21T15:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.599571 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.599615 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.599631 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.599652 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.599664 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:25Z","lastTransitionTime":"2026-01-21T15:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.702677 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.702712 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.702723 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.702738 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.702750 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:25Z","lastTransitionTime":"2026-01-21T15:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.805879 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.805946 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.805964 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.805987 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.806004 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:25Z","lastTransitionTime":"2026-01-21T15:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.871996 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:24:12.126600058 +0000 UTC Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.908451 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.908513 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.908530 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.908552 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:25 crc kubenswrapper[4890]: I0121 15:32:25.908568 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:25Z","lastTransitionTime":"2026-01-21T15:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.010603 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.010636 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.010647 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.010664 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.010675 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.113231 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.113265 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.113275 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.113290 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.113301 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.143183 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.215498 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.215547 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.215558 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.215577 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.215590 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.318420 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.318452 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.318463 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.318476 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.318486 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.420808 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.420856 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.420872 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.420893 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.420906 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.523014 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.523057 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.523068 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.523085 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.523095 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.625391 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.625433 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.625442 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.625456 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.625467 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.728011 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.728049 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.728057 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.728072 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.728081 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.830679 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.830726 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.830746 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.830777 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.830790 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.872413 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:50:39.825124666 +0000 UTC Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.914136 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.914246 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:26 crc kubenswrapper[4890]: E0121 15:32:26.914285 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.914421 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:26 crc kubenswrapper[4890]: E0121 15:32:26.914458 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:26 crc kubenswrapper[4890]: E0121 15:32:26.914558 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.932813 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.932868 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.932886 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.932908 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:26 crc kubenswrapper[4890]: I0121 15:32:26.932924 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:26Z","lastTransitionTime":"2026-01-21T15:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.035675 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.035737 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.035761 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.035790 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.035813 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.138760 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.138802 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.138811 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.138829 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.138838 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.241477 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.241552 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.241570 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.242188 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.242388 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.345579 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.345640 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.345658 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.345690 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.345706 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.448546 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.448603 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.448615 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.448635 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.448647 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.551813 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.551877 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.551894 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.551918 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.551937 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.654400 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.654472 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.654495 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.654533 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.654557 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.756294 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.756326 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.756335 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.756366 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.756375 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.859221 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.859267 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.859278 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.859295 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.859306 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.872854 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:54:42.93806826 +0000 UTC Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.946785 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\
",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b
8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9
be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:27Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.961322 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.961378 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.961389 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.961404 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.961417 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:27Z","lastTransitionTime":"2026-01-21T15:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.968897 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:27Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.983908 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:27Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:27 crc kubenswrapper[4890]: I0121 15:32:27.999128 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:27Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.024005 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.045264 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.064749 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.064962 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.065813 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.065613 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.066117 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.066141 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.079168 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.096474 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15
:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.109601 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.121088 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.138947 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.150875 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/0.log" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.153606 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b" exitCode=1 Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.153667 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" 
event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.155740 4890 scope.go:117] "RemoveContainer" containerID="e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.156042 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.168257 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.168477 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.168585 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.168665 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.168740 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.171114 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.186692 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.199625 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.210470 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.221834 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.233978 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.246709 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.271717 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.271771 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc 
kubenswrapper[4890]: I0121 15:32:28.271788 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.271813 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.271830 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.275694 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:5
9Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.298935 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:27Z\\\",\\\"message\\\":\\\"ssservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.573580 6196 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 15:32:26.573726 6196 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.574104 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574119 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574433 6196 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:26.574453 6196 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:32:26.574458 6196 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:32:26.574503 6196 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:26.574511 6196 factory.go:656] Stopping watch factory\\\\nI0121 15:32:26.574565 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:26.574525 6196 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f
677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.314937 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.328437 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.337560 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.348208 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.358121 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.370806 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.373679 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.373803 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.373880 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.373967 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.374040 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.381984 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.392936 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15
:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.476737 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.477081 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.477256 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.477499 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.477672 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.580239 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.580273 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.580282 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.580298 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.580309 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.683471 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.683543 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.683569 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.683601 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.683624 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.785723 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.785760 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.785771 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.785788 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.785800 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.873986 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:35:30.977930009 +0000 UTC Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.888763 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.888804 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.888825 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.888843 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.888856 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.913268 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.913336 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.913448 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:28 crc kubenswrapper[4890]: E0121 15:32:28.913510 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:28 crc kubenswrapper[4890]: E0121 15:32:28.913563 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:28 crc kubenswrapper[4890]: E0121 15:32:28.913624 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.990765 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.990795 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.990803 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.990815 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:28 crc kubenswrapper[4890]: I0121 15:32:28.990823 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:28Z","lastTransitionTime":"2026-01-21T15:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.093186 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.093211 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.093219 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.093233 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.093241 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:29Z","lastTransitionTime":"2026-01-21T15:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.157343 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/0.log" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.159964 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.160066 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.170538 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.179781 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128e
b77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.195430 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.195934 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.195981 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.195993 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.196013 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.196025 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:29Z","lastTransitionTime":"2026-01-21T15:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.209037 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.222793 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.236161 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.246076 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.258170 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.269261 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.284585 4890 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.296724 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.298389 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.298421 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.298429 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:29 crc 
kubenswrapper[4890]: I0121 15:32:29.298443 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.298451 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:29Z","lastTransitionTime":"2026-01-21T15:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.310054 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 
15:32:29.321252 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.351085 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:27Z\\\",\\\"message\\\":\\\"ssservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.573580 6196 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 15:32:26.573726 6196 reflector.go:311] Stopping 
reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.574104 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574119 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574433 6196 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:26.574453 6196 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:32:26.574458 6196 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:32:26.574503 6196 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:26.574511 6196 factory.go:656] Stopping watch factory\\\\nI0121 15:32:26.574565 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:26.574525 6196 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.372976 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.400802 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.400847 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.400856 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.400870 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.400880 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:29Z","lastTransitionTime":"2026-01-21T15:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.502642 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.502707 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.502729 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.502757 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.502778 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:29Z","lastTransitionTime":"2026-01-21T15:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.555994 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz"] Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.556380 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.559166 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.561153 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.574775 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.590847 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.605214 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.605250 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.605260 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.605276 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.605287 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:29Z","lastTransitionTime":"2026-01-21T15:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.606802 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.628073 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.628715 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3278cad5-c53a-400a-9d2d-22a98bda2773-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.628808 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3278cad5-c53a-400a-9d2d-22a98bda2773-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.628888 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3278cad5-c53a-400a-9d2d-22a98bda2773-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.628968 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42ftf\" (UniqueName: \"kubernetes.io/projected/3278cad5-c53a-400a-9d2d-22a98bda2773-kube-api-access-42ftf\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: 
\"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.639833 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.660411 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.678009 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.694292 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.708085 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:29 crc 
kubenswrapper[4890]: I0121 15:32:29.708138 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.708157 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.708226 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.708254 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:29Z","lastTransitionTime":"2026-01-21T15:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.708997 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.723945 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.729996 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3278cad5-c53a-400a-9d2d-22a98bda2773-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: 
\"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.730093 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3278cad5-c53a-400a-9d2d-22a98bda2773-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.730158 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42ftf\" (UniqueName: \"kubernetes.io/projected/3278cad5-c53a-400a-9d2d-22a98bda2773-kube-api-access-42ftf\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.730281 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3278cad5-c53a-400a-9d2d-22a98bda2773-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.731556 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3278cad5-c53a-400a-9d2d-22a98bda2773-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.731561 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/3278cad5-c53a-400a-9d2d-22a98bda2773-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.737178 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3278cad5-c53a-400a-9d2d-22a98bda2773-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.740292 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.759206 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.761815 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42ftf\" (UniqueName: \"kubernetes.io/projected/3278cad5-c53a-400a-9d2d-22a98bda2773-kube-api-access-42ftf\") pod \"ovnkube-control-plane-749d76644c-nzzdz\" (UID: \"3278cad5-c53a-400a-9d2d-22a98bda2773\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.775336 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.793806 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.811495 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.811538 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.811550 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.811566 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.811578 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:29Z","lastTransitionTime":"2026-01-21T15:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.823778 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.847250 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:27Z\\\",\\\"message\\\":\\\"ssservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.573580 
6196 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 15:32:26.573726 6196 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.574104 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574119 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574433 6196 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:26.574453 6196 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:32:26.574458 6196 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:32:26.574503 6196 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:26.574511 6196 factory.go:656] Stopping watch factory\\\\nI0121 15:32:26.574565 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:26.574525 6196 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.867664 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.875219 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:12:50.206379757 +0000 UTC Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.915118 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.915174 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.915192 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.915219 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:29 crc kubenswrapper[4890]: I0121 15:32:29.915237 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:29Z","lastTransitionTime":"2026-01-21T15:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.017943 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.018224 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.018284 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.018343 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.018427 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.121553 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.121633 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.121654 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.121681 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.121698 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.165558 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" event={"ID":"3278cad5-c53a-400a-9d2d-22a98bda2773","Type":"ContainerStarted","Data":"4d88556232905233c8a4fd15b015768117b99a987daa97d05b7d44bdc3f8365c"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.224260 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.224725 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.224900 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.225152 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.225295 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.328959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.329184 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.329263 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.329345 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.329522 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.431444 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.431494 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.431505 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.431523 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.431533 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.534263 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.534302 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.534319 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.534335 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.534346 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.637213 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.637273 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.637291 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.637315 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.637332 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.693224 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-j9mfr"] Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.693758 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:30 crc kubenswrapper[4890]: E0121 15:32:30.693829 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.710043 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.723407 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.733456 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z"
Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.739924 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.739957 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.739966 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.739981 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.739991 4890 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.749531 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.763413 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fd
ee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.790311 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:27Z\\\",\\\"message\\\":\\\"ssservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.573580 6196 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 15:32:26.573726 6196 reflector.go:311] Stopping 
reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.574104 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574119 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574433 6196 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:26.574453 6196 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:32:26.574458 6196 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:32:26.574503 6196 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:26.574511 6196 factory.go:656] Stopping watch factory\\\\nI0121 15:32:26.574565 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:26.574525 6196 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.822668 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.836520 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.841601 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7gn\" (UniqueName: \"kubernetes.io/projected/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-kube-api-access-5m7gn\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.841678 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.841709 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.841726 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.841736 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.841778 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.841789 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.849899 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.860431 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.874421 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.875427 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:42:54.335889537 +0000 UTC Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.885389 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.898417 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.907903 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.913682 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.913758 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:30 crc kubenswrapper[4890]: E0121 15:32:30.913784 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.913683 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:30 crc kubenswrapper[4890]: E0121 15:32:30.913879 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:30 crc kubenswrapper[4890]: E0121 15:32:30.913967 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.918923 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc 
kubenswrapper[4890]: I0121 15:32:30.935234 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.935317 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.935532 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.935542 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.935555 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.935566 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.943007 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m7gn\" (UniqueName: \"kubernetes.io/projected/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-kube-api-access-5m7gn\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.943082 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:30 crc kubenswrapper[4890]: E0121 15:32:30.943200 4890 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:30 crc kubenswrapper[4890]: E0121 15:32:30.943270 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs podName:a86abbe4-e7c5-4a3e-a8d7-02d82267ded6 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:31.443251331 +0000 UTC m=+33.804693750 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs") pod "network-metrics-daemon-j9mfr" (UID: "a86abbe4-e7c5-4a3e-a8d7-02d82267ded6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:30 crc kubenswrapper[4890]: E0121 15:32:30.951345 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-f
fcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.953391 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-e
tc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.956182 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.956223 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.956237 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.956254 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.956267 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.960501 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m7gn\" (UniqueName: \"kubernetes.io/projected/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-kube-api-access-5m7gn\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:30 crc kubenswrapper[4890]: E0121 15:32:30.970429 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.973997 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.974037 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.974045 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.974062 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.974071 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:30 crc kubenswrapper[4890]: E0121 15:32:30.985093 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.989285 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.989321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.989334 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.989367 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:30 crc kubenswrapper[4890]: I0121 15:32:30.989380 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:30Z","lastTransitionTime":"2026-01-21T15:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: E0121 15:32:31.003807 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.008130 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.008166 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.008199 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.008220 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.008232 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: E0121 15:32:31.019256 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: E0121 15:32:31.019405 4890 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.021194 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.021254 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.021270 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.021293 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.021309 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.124332 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.124405 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.124425 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.124445 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.124459 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.171534 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/1.log" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.172253 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/0.log" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.175898 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876" exitCode=1 Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.175955 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.176055 4890 scope.go:117] "RemoveContainer" containerID="e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.176935 4890 scope.go:117] "RemoveContainer" containerID="545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876" Jan 21 15:32:31 crc kubenswrapper[4890]: E0121 15:32:31.177122 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.179323 4890 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" event={"ID":"3278cad5-c53a-400a-9d2d-22a98bda2773","Type":"ContainerStarted","Data":"22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.179408 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" event={"ID":"3278cad5-c53a-400a-9d2d-22a98bda2773","Type":"ContainerStarted","Data":"63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.198974 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.215647 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.228207 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.228259 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.228279 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.228304 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.228322 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.232403 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:
32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.244015 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.255502 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc 
kubenswrapper[4890]: I0121 15:32:31.267476 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.280065 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.292516 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.309928 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.327320 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.333271 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.333320 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.333331 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.333364 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.333379 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.360184 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.384120 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:27Z\\\",\\\"message\\\":\\\"ssservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.573580 6196 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 15:32:26.573726 6196 reflector.go:311] Stopping 
reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.574104 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574119 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574433 6196 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:26.574453 6196 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:32:26.574458 6196 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:32:26.574503 6196 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:26.574511 6196 factory.go:656] Stopping watch factory\\\\nI0121 15:32:26.574565 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:26.574525 6196 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\" OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 15:32:29.475146 6333 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0121 15:32:29.476604 6333 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0121 15:32:29.476616 6333 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0121 15:32:29.476616 6333 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0121 15:32:29.475185 6333 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qnlzh\\\\nI0121 15:32:29.476628 6333 ovnkube_controller.go:804] Add Logic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",
\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.398541 4890 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.410670 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.422733 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.436454 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.436500 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.436515 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.436549 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.436568 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.438013 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.449588 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:31 crc kubenswrapper[4890]: E0121 15:32:31.449815 4890 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:31 crc kubenswrapper[4890]: E0121 15:32:31.449906 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs podName:a86abbe4-e7c5-4a3e-a8d7-02d82267ded6 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:32.449881177 +0000 UTC m=+34.811323626 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs") pod "network-metrics-daemon-j9mfr" (UID: "a86abbe4-e7c5-4a3e-a8d7-02d82267ded6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.450737 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.466933 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.478029 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.489090 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.503914 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.514499 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.531509 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.538296 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.538337 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.538369 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.538387 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.538432 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.552589 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.569841 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15
:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.583968 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.599374 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc 
kubenswrapper[4890]: I0121 15:32:31.615705 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.629285 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.640402 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.640450 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc 
kubenswrapper[4890]: I0121 15:32:31.640465 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.640484 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.640498 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.641766 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.655309 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.668515 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.692329 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.716432 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:27Z\\\",\\\"message\\\":\\\"ssservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.573580 
6196 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 15:32:26.573726 6196 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.574104 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574119 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574433 6196 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:26.574453 6196 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:32:26.574458 6196 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:32:26.574503 6196 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:26.574511 6196 factory.go:656] Stopping watch factory\\\\nI0121 15:32:26.574565 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:26.574525 6196 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\" OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 15:32:29.475146 6333 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0121 15:32:29.476604 6333 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0121 15:32:29.476616 6333 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0121 15:32:29.476616 6333 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0121 15:32:29.475185 6333 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qnlzh\\\\nI0121 15:32:29.476628 6333 ovnkube_controller.go:804] Add 
Logic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f
5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.742981 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.743023 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.743031 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.743046 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.743054 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.845296 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.845392 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.845418 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.845449 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.845471 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.875813 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:58:11.404744992 +0000 UTC Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.913790 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:31 crc kubenswrapper[4890]: E0121 15:32:31.913921 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.953275 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.953543 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.953559 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.954045 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:31 crc kubenswrapper[4890]: I0121 15:32:31.954058 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:31Z","lastTransitionTime":"2026-01-21T15:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.056946 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.057003 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.057015 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.057034 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.057046 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.111978 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.141632 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac
91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.159151 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.159191 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.159200 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.159216 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.159226 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.164885 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:27Z\\\",\\\"message\\\":\\\"ssservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.573580 6196 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 15:32:26.573726 6196 reflector.go:311] Stopping 
reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.574104 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574119 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574433 6196 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:26.574453 6196 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:32:26.574458 6196 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:32:26.574503 6196 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:26.574511 6196 factory.go:656] Stopping watch factory\\\\nI0121 15:32:26.574565 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:26.574525 6196 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\" OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 15:32:29.475146 6333 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0121 15:32:29.476604 6333 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0121 15:32:29.476616 6333 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0121 15:32:29.476616 6333 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0121 15:32:29.475185 6333 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qnlzh\\\\nI0121 15:32:29.476628 6333 ovnkube_controller.go:804] Add Logic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",
\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.179159 4890 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.184762 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/1.log" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.193159 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.207563 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.227193 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.239622 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.260590 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\
\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.261965 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.262003 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.262014 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc 
kubenswrapper[4890]: I0121 15:32:32.262033 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.262044 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.274992 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.289111 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.304469 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.318099 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc 
kubenswrapper[4890]: I0121 15:32:32.332050 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.345691 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.362657 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.364255 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.364290 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.364301 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.364321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.364332 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.377997 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.394930 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.462843 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.463091 4890 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.463196 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs podName:a86abbe4-e7c5-4a3e-a8d7-02d82267ded6 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:34.46317096 +0000 UTC m=+36.824613389 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs") pod "network-metrics-daemon-j9mfr" (UID: "a86abbe4-e7c5-4a3e-a8d7-02d82267ded6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.467142 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.467176 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.467184 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.467198 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.467207 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.569906 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.569958 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.569969 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.569987 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.570000 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.664938 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.664996 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.665015 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.665033 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665074 4890 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665142 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665157 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665167 4890 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665169 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:48.665146784 +0000 UTC m=+51.026589203 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665211 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:32:48.665197695 +0000 UTC m=+51.026640094 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665258 4890 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665309 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665419 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665454 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:48.665410751 +0000 UTC m=+51.026853200 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665461 4890 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.665561 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:48.665533504 +0000 UTC m=+51.026975953 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.672814 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.672859 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.672871 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.672890 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.672904 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.765943 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.766213 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:32:48.766182422 +0000 UTC m=+51.127624861 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.776011 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.776053 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.776064 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.776081 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: 
I0121 15:32:32.776094 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.876046 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:40:54.067206699 +0000 UTC Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.879234 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.879307 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.879336 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.879403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.879424 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.914027 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.914110 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.914184 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.914254 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.914347 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:32 crc kubenswrapper[4890]: E0121 15:32:32.914437 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.982035 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.982288 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.982365 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.982442 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:32 crc kubenswrapper[4890]: I0121 15:32:32.982500 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:32Z","lastTransitionTime":"2026-01-21T15:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.085947 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.086022 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.086044 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.086076 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.086098 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:33Z","lastTransitionTime":"2026-01-21T15:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.189160 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.189426 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.189497 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.189579 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.189650 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:33Z","lastTransitionTime":"2026-01-21T15:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.293097 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.293136 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.293149 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.293167 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.293178 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:33Z","lastTransitionTime":"2026-01-21T15:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.396068 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.396098 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.396106 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.396119 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.396128 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:33Z","lastTransitionTime":"2026-01-21T15:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.498647 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.498697 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.498708 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.498725 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.498737 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:33Z","lastTransitionTime":"2026-01-21T15:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.601076 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.601113 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.601121 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.601137 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.601146 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:33Z","lastTransitionTime":"2026-01-21T15:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.704387 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.704436 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.704448 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.704464 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.704475 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:33Z","lastTransitionTime":"2026-01-21T15:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.808277 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.808348 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.808426 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.808458 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.808479 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:33Z","lastTransitionTime":"2026-01-21T15:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.876718 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:06:52.659723616 +0000 UTC Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.911031 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.911095 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.911113 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.911138 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.911157 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:33Z","lastTransitionTime":"2026-01-21T15:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:33 crc kubenswrapper[4890]: I0121 15:32:33.913307 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:33 crc kubenswrapper[4890]: E0121 15:32:33.913515 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.013732 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.013809 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.013832 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.013863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.013887 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.117147 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.117235 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.117261 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.117293 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.117317 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.221876 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.221941 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.221958 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.221987 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.222003 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.335700 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.335759 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.335776 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.335843 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.336423 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.438663 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.438704 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.438717 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.438731 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.438741 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.487502 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:34 crc kubenswrapper[4890]: E0121 15:32:34.487690 4890 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:34 crc kubenswrapper[4890]: E0121 15:32:34.487761 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs podName:a86abbe4-e7c5-4a3e-a8d7-02d82267ded6 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:38.487743457 +0000 UTC m=+40.849185876 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs") pod "network-metrics-daemon-j9mfr" (UID: "a86abbe4-e7c5-4a3e-a8d7-02d82267ded6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.542215 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.542270 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.542289 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.542309 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.542324 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.645190 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.645243 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.645253 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.645270 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.645281 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.747055 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.747099 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.747113 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.747128 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.747137 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.849683 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.849738 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.849756 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.849779 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.849797 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.877145 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:28:20.665862406 +0000 UTC Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.913311 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.913418 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:34 crc kubenswrapper[4890]: E0121 15:32:34.913486 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.913517 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:34 crc kubenswrapper[4890]: E0121 15:32:34.913657 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:34 crc kubenswrapper[4890]: E0121 15:32:34.913762 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.953271 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.953323 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.953334 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.953387 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:34 crc kubenswrapper[4890]: I0121 15:32:34.953400 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:34Z","lastTransitionTime":"2026-01-21T15:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.056243 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.056307 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.056323 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.056379 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.056407 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.159911 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.159985 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.159998 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.160020 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.160077 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.263495 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.263572 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.263594 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.263622 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.263645 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.366738 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.366794 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.366811 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.366836 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.366856 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.470343 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.470455 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.470480 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.470510 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.470534 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.574417 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.574490 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.574527 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.574556 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.574578 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.676888 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.676940 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.676955 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.676976 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.676988 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.780481 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.780560 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.780586 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.780616 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.780643 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.877929 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:46:28.457787769 +0000 UTC Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.883863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.883955 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.883977 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.884007 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.884027 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.913936 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:35 crc kubenswrapper[4890]: E0121 15:32:35.914189 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.986312 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.986398 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.986412 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.986429 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:35 crc kubenswrapper[4890]: I0121 15:32:35.986445 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:35Z","lastTransitionTime":"2026-01-21T15:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.088848 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.088908 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.088926 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.088951 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.088970 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:36Z","lastTransitionTime":"2026-01-21T15:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.192021 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.192070 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.192094 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.192125 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.192147 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:36Z","lastTransitionTime":"2026-01-21T15:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.295261 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.295339 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.295393 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.295421 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.295440 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:36Z","lastTransitionTime":"2026-01-21T15:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.398456 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.398519 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.398536 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.398593 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.398618 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:36Z","lastTransitionTime":"2026-01-21T15:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.501321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.501437 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.501459 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.501484 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.501501 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:36Z","lastTransitionTime":"2026-01-21T15:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.604465 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.604545 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.604569 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.604600 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.604621 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:36Z","lastTransitionTime":"2026-01-21T15:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.707017 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.707087 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.707100 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.707118 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.707130 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:36Z","lastTransitionTime":"2026-01-21T15:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.809305 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.809395 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.809419 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.809446 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.809466 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:36Z","lastTransitionTime":"2026-01-21T15:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.878812 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:48:06.120676748 +0000 UTC Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.913335 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.913512 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.913698 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.913762 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:36 crc kubenswrapper[4890]: E0121 15:32:36.913756 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.913787 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.913821 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.913862 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:36Z","lastTransitionTime":"2026-01-21T15:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:36 crc kubenswrapper[4890]: I0121 15:32:36.913904 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:36 crc kubenswrapper[4890]: E0121 15:32:36.914063 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:36 crc kubenswrapper[4890]: E0121 15:32:36.914177 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.016393 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.016464 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.016482 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.016505 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.016523 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.119710 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.119757 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.119775 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.119798 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.119815 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.222768 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.222848 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.222867 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.222895 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.222917 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.325912 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.325955 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.325989 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.326012 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.326029 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.427662 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.427709 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.427721 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.427738 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.427748 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.530720 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.530767 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.530778 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.530794 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.530806 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.632196 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.632238 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.632249 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.632266 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.632278 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.734333 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.734400 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.734411 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.734430 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.734439 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.837113 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.837182 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.837215 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.837243 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.837264 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.879869 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:14:11.001878583 +0000 UTC Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.913781 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:37 crc kubenswrapper[4890]: E0121 15:32:37.913984 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.934861 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.939584 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.939655 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.939722 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.939784 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.939869 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:37Z","lastTransitionTime":"2026-01-21T15:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.948998 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927
b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.963041 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.975376 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:37 crc kubenswrapper[4890]: I0121 15:32:37.991016 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.021195 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.042269 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.042327 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.042344 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.042396 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.042415 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.058896 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69563452c073918455f10eeef2bc2a6a2867df809815dc44b7da22eab2c8b7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:27Z\\\",\\\"message\\\":\\\"ssservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.573580 6196 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 15:32:26.573726 6196 reflector.go:311] Stopping 
reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:32:26.574104 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574119 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:32:26.574433 6196 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:26.574453 6196 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:32:26.574458 6196 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:32:26.574503 6196 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:26.574511 6196 factory.go:656] Stopping watch factory\\\\nI0121 15:32:26.574565 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:26.574525 6196 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\" OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 15:32:29.475146 6333 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0121 15:32:29.476604 6333 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0121 15:32:29.476616 6333 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0121 15:32:29.476616 6333 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0121 15:32:29.475185 6333 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qnlzh\\\\nI0121 15:32:29.476628 6333 ovnkube_controller.go:804] Add Logic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",
\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.075635 4890 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.089116 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.104619 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.124696 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.143602 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.145167 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.145211 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.145256 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.145274 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.145286 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.158017 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.172978 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.184576 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.195669 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.206185 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.247976 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.248015 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.248026 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.248041 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.248052 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.349942 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.349980 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.349992 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.350007 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.350021 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.452918 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.452966 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.452977 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.452994 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.453006 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.528943 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:38 crc kubenswrapper[4890]: E0121 15:32:38.529307 4890 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:38 crc kubenswrapper[4890]: E0121 15:32:38.529488 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs podName:a86abbe4-e7c5-4a3e-a8d7-02d82267ded6 nodeName:}" failed. No retries permitted until 2026-01-21 15:32:46.529452324 +0000 UTC m=+48.890894793 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs") pod "network-metrics-daemon-j9mfr" (UID: "a86abbe4-e7c5-4a3e-a8d7-02d82267ded6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.556064 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.556116 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.556132 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.556156 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.556172 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.659655 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.659719 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.659739 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.659765 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.659786 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.761815 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.761862 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.761873 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.761889 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.761900 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.864748 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.864826 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.864846 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.864885 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.864904 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.880277 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:55:07.351819459 +0000 UTC Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.913647 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.913710 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.913787 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:38 crc kubenswrapper[4890]: E0121 15:32:38.913923 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:38 crc kubenswrapper[4890]: E0121 15:32:38.914025 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:38 crc kubenswrapper[4890]: E0121 15:32:38.914106 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.967264 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.967315 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.967330 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.967346 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:38 crc kubenswrapper[4890]: I0121 15:32:38.967375 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:38Z","lastTransitionTime":"2026-01-21T15:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.070763 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.070822 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.070838 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.070863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.070879 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:39Z","lastTransitionTime":"2026-01-21T15:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.174970 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.175006 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.175016 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.175029 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.175037 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:39Z","lastTransitionTime":"2026-01-21T15:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.278187 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.278259 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.278290 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.278320 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.278341 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:39Z","lastTransitionTime":"2026-01-21T15:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.381754 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.381814 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.381830 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.381853 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.381872 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:39Z","lastTransitionTime":"2026-01-21T15:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.484610 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.484651 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.484666 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.484683 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.484693 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:39Z","lastTransitionTime":"2026-01-21T15:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.587300 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.587396 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.587432 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.587462 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.587481 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:39Z","lastTransitionTime":"2026-01-21T15:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.690963 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.691027 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.691049 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.691077 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.691098 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:39Z","lastTransitionTime":"2026-01-21T15:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.794833 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.794881 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.794892 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.794910 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.794922 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:39Z","lastTransitionTime":"2026-01-21T15:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.881390 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:38:43.890612866 +0000 UTC Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.897472 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.897516 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.897528 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.897549 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.897566 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:39Z","lastTransitionTime":"2026-01-21T15:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:39 crc kubenswrapper[4890]: I0121 15:32:39.914109 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:39 crc kubenswrapper[4890]: E0121 15:32:39.914316 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.000922 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.000990 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.001014 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.001046 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.001070 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.104394 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.104452 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.104469 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.104494 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.104512 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.208090 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.208161 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.208183 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.208215 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.208236 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.310646 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.310719 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.310738 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.310764 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.310782 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.415406 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.416051 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.416074 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.416094 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.416107 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.519072 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.519128 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.519146 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.519168 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.519184 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.621793 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.621857 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.621877 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.621900 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.621918 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.724518 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.724622 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.724638 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.724662 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.724679 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.828794 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.828867 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.828884 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.828912 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.828932 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.881810 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:39:50.093215953 +0000 UTC Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.913682 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.913781 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:40 crc kubenswrapper[4890]: E0121 15:32:40.913958 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.913981 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:40 crc kubenswrapper[4890]: E0121 15:32:40.914177 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:40 crc kubenswrapper[4890]: E0121 15:32:40.914551 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.931277 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.931393 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.931403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.931424 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:40 crc kubenswrapper[4890]: I0121 15:32:40.931438 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:40Z","lastTransitionTime":"2026-01-21T15:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.033536 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.033604 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.033620 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.033648 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.033669 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.128839 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.128884 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.128896 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.128914 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.128927 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: E0121 15:32:41.144059 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.149446 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.149523 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.149550 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.149578 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.149597 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.174129 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.174169 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.174186 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.174243 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.174261 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.202858 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.202948 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.202975 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.203009 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.203032 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: E0121 15:32:41.219968 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.225045 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.225101 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.225123 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.225145 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.225161 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: E0121 15:32:41.242482 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:41 crc kubenswrapper[4890]: E0121 15:32:41.242787 4890 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.245018 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.245066 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.245084 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.245105 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.245121 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.348053 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.348130 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.348158 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.348190 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.348214 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.451395 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.451478 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.451499 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.451535 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.451558 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.554707 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.554778 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.554798 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.554825 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.554843 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.657924 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.657981 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.657995 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.658016 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.658031 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.760595 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.760643 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.760655 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.760673 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.760685 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.862901 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.862965 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.862980 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.863002 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.863018 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.881960 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:13:33.428611288 +0000 UTC Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.913374 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:41 crc kubenswrapper[4890]: E0121 15:32:41.913579 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.966289 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.966341 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.966381 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.966400 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:41 crc kubenswrapper[4890]: I0121 15:32:41.966412 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:41Z","lastTransitionTime":"2026-01-21T15:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.068254 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.068291 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.068303 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.068319 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.068330 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.170959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.171037 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.171059 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.171086 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.171111 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.273412 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.273456 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.273466 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.273482 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.273494 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.376315 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.376372 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.376383 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.376402 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.376414 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.479672 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.479721 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.479738 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.479785 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.479802 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.583084 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.583151 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.583166 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.583185 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.583198 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.686268 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.686307 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.686321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.686341 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.686376 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.790035 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.790091 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.790109 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.790133 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.790150 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.882247 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:14:41.872605506 +0000 UTC Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.893304 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.893386 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.893403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.893424 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.893436 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.913672 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.913720 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.913672 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:42 crc kubenswrapper[4890]: E0121 15:32:42.913849 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:42 crc kubenswrapper[4890]: E0121 15:32:42.914012 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:42 crc kubenswrapper[4890]: E0121 15:32:42.914413 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.914816 4890 scope.go:117] "RemoveContainer" containerID="545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.930127 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.944336 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.959607 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.973545 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.992718 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.995520 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.995573 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.995590 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.995662 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:42 crc kubenswrapper[4890]: I0121 15:32:42.995940 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:42Z","lastTransitionTime":"2026-01-21T15:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.012915 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.030462 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\" OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 15:32:29.475146 6333 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0121 15:32:29.476604 6333 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0121 15:32:29.476616 6333 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0121 15:32:29.476616 6333 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0121 15:32:29.475185 6333 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qnlzh\\\\nI0121 15:32:29.476628 6333 ovnkube_controller.go:804] Add Logic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.043717 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176f
b29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.057270 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.068940 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.081980 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.098338 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.098394 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.098402 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.098416 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.098430 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:43Z","lastTransitionTime":"2026-01-21T15:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.104112 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.116264 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc 
kubenswrapper[4890]: I0121 15:32:43.133077 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a
7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.148299 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.165189 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.179735 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.200473 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.200526 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.200543 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.200567 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.200582 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:43Z","lastTransitionTime":"2026-01-21T15:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.303245 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.303281 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.303292 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.303310 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.303321 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:43Z","lastTransitionTime":"2026-01-21T15:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.405161 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.405266 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.405285 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.405312 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.405331 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:43Z","lastTransitionTime":"2026-01-21T15:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.507606 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.507652 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.507664 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.507680 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.507692 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:43Z","lastTransitionTime":"2026-01-21T15:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.609992 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.610029 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.610036 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.610049 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.610060 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:43Z","lastTransitionTime":"2026-01-21T15:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.712475 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.712527 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.712548 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.712576 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.712597 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:43Z","lastTransitionTime":"2026-01-21T15:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.815490 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.815534 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.815543 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.815560 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.815568 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:43Z","lastTransitionTime":"2026-01-21T15:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.882692 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:28:22.667287219 +0000 UTC Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.913597 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:43 crc kubenswrapper[4890]: E0121 15:32:43.913852 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.919423 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.919710 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.919927 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.920145 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:43 crc kubenswrapper[4890]: I0121 15:32:43.920404 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:43Z","lastTransitionTime":"2026-01-21T15:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.024087 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.024129 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.024143 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.024160 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.024172 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.126403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.126449 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.126460 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.126477 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.126489 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.229595 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.229660 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.229677 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.229699 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.229717 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.236994 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/1.log" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.240097 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.240223 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.273386 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\" OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 15:32:29.475146 6333 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0121 15:32:29.476604 6333 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0121 15:32:29.476616 6333 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0121 15:32:29.476616 6333 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0121 15:32:29.475185 6333 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qnlzh\\\\nI0121 15:32:29.476628 6333 ovnkube_controller.go:804] Add 
Logic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.303321 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.319340 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.333719 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.333789 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.333807 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.333832 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.333849 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.344004 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.362676 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.378733 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.393886 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.411022 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.424873 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.437010 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.437069 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.437087 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.437112 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.437129 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.440067 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc 
kubenswrapper[4890]: I0121 15:32:44.459726 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a
7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.481107 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.505074 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.521173 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.537931 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.540400 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.540470 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.540497 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.540527 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.540550 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.552161 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.566889 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fd
ee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.644121 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 
15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.644196 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.644218 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.644248 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.644269 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.747580 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.747651 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.747666 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.747684 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.747698 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.850300 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.850391 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.850415 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.850446 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.850467 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.883546 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:21:15.367055119 +0000 UTC Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.913156 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:44 crc kubenswrapper[4890]: E0121 15:32:44.913314 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.913183 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:44 crc kubenswrapper[4890]: E0121 15:32:44.913603 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.913168 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:44 crc kubenswrapper[4890]: E0121 15:32:44.913664 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.953857 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.953923 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.953945 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.953991 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:44 crc kubenswrapper[4890]: I0121 15:32:44.954013 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:44Z","lastTransitionTime":"2026-01-21T15:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.059139 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.059225 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.059251 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.059278 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.059297 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.162614 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.162677 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.162696 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.162727 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.162748 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.245896 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/2.log" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.246888 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/1.log" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.250992 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0" exitCode=1 Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.251046 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.251100 4890 scope.go:117] "RemoveContainer" containerID="545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.252464 4890 scope.go:117] "RemoveContainer" containerID="962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0" Jan 21 15:32:45 crc kubenswrapper[4890]: E0121 15:32:45.252739 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.269816 4890 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.269854 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.269865 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.269883 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.269894 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.277755 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.293684 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.308581 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.327584 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.353704 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.368754 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.373065 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.373117 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.373133 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.373157 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.373175 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.385593 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.400537 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.420462 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.436245 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.455506 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://545edc571e896823223157877f2984a3e8fc51434a55806f5947cdfd47581876\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\" OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 15:32:29.475146 6333 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0121 15:32:29.476604 6333 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0121 15:32:29.476616 6333 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0121 15:32:29.476616 6333 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0121 15:32:29.475185 6333 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qnlzh\\\\nI0121 15:32:29.476628 6333 ovnkube_controller.go:804] Add Logic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 
15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPat
h\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.475946 4890 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.475979 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.475987 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.476001 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.476010 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.486286 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.497255 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128e
b77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.517252 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.530264 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.545090 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.557132 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.579276 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.579317 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.579327 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.579342 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.579370 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.681090 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.681127 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.681138 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.681153 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.681165 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.783693 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.783733 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.783744 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.783761 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.783771 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.883703 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:03:37.086535225 +0000 UTC Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.886655 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.886691 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.886702 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.886718 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.886730 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.913705 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:45 crc kubenswrapper[4890]: E0121 15:32:45.913856 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.989277 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.989367 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.989379 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.989395 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:45 crc kubenswrapper[4890]: I0121 15:32:45.989405 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:45Z","lastTransitionTime":"2026-01-21T15:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.092463 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.092514 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.092531 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.092558 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.092575 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:46Z","lastTransitionTime":"2026-01-21T15:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.195692 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.196321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.196477 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.196510 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.196529 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:46Z","lastTransitionTime":"2026-01-21T15:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.257808 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/2.log" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.299501 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.299565 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.299591 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.299621 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.299645 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:46Z","lastTransitionTime":"2026-01-21T15:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.403226 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.403306 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.403332 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.403390 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.403409 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:46Z","lastTransitionTime":"2026-01-21T15:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.506573 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.506626 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.506643 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.506665 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.506682 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:46Z","lastTransitionTime":"2026-01-21T15:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.609343 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.609446 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.609463 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.609488 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.609505 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:46Z","lastTransitionTime":"2026-01-21T15:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.623942 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:46 crc kubenswrapper[4890]: E0121 15:32:46.624125 4890 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:46 crc kubenswrapper[4890]: E0121 15:32:46.624241 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs podName:a86abbe4-e7c5-4a3e-a8d7-02d82267ded6 nodeName:}" failed. No retries permitted until 2026-01-21 15:33:02.624210883 +0000 UTC m=+64.985653372 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs") pod "network-metrics-daemon-j9mfr" (UID: "a86abbe4-e7c5-4a3e-a8d7-02d82267ded6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.713163 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.713252 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.713274 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.713302 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.713324 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:46Z","lastTransitionTime":"2026-01-21T15:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.815681 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.815729 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.815738 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.815752 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.815762 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:46Z","lastTransitionTime":"2026-01-21T15:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.884774 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:34:09.169198069 +0000 UTC Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.913395 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.913403 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.913656 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:46 crc kubenswrapper[4890]: E0121 15:32:46.913795 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:46 crc kubenswrapper[4890]: E0121 15:32:46.914153 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:46 crc kubenswrapper[4890]: E0121 15:32:46.914427 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.918484 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.918550 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.918563 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.918580 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:46 crc kubenswrapper[4890]: I0121 15:32:46.918592 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:46Z","lastTransitionTime":"2026-01-21T15:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.021600 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.021663 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.021689 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.021717 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.021736 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.124005 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.124058 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.124072 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.124092 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.124106 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.226496 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.226536 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.226546 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.226558 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.226568 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.328661 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.328990 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.328999 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.329013 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.329022 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.431370 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.431411 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.431425 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.431443 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.431454 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.506452 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.507282 4890 scope.go:117] "RemoveContainer" containerID="962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0" Jan 21 15:32:47 crc kubenswrapper[4890]: E0121 15:32:47.507454 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.525932 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.533624 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.533678 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.533695 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.533721 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.533739 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.547175 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.568103 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.584101 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.600279 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.625786 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.636474 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.636703 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.636717 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.636735 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.636768 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.651858 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.664557 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.677190 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.696685 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.708168 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.719732 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.732740 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.739254 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.739304 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.739317 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.739340 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.739369 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.747382 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.757756 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.769153 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.781703 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.842023 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.842067 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.842079 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.842096 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.842108 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.885816 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:28:51.525885903 +0000 UTC Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.913584 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:47 crc kubenswrapper[4890]: E0121 15:32:47.913860 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.935249 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.944381 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.944417 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.944428 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.944444 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.944453 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:47Z","lastTransitionTime":"2026-01-21T15:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.949315 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.963492 4890 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.973369 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.983856 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:47 crc kubenswrapper[4890]: I0121 15:32:47.995641 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.006940 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.028104 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.046971 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.047016 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.047024 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.047038 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.047046 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.048716 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.070048 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.084333 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.101219 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.114244 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.129848 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.143583 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.148670 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.148717 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc 
kubenswrapper[4890]: I0121 15:32:48.148728 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.148747 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.148759 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.163661 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:5
9Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.182808 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.250803 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.250842 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.250853 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.250866 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.250876 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.353382 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.353418 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.353428 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.353444 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.353459 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.455825 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.455867 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.455876 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.455891 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.455900 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.559458 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.559500 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.559511 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.559528 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.559538 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.661796 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.661841 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.661854 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.661870 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.661881 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.750112 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.750347 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.750476 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.750578 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.750200 4890 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.750729 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:33:20.750712672 +0000 UTC m=+83.112155071 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.750472 4890 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.750807 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:33:20.750793124 +0000 UTC m=+83.112235533 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.750539 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.750832 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.750843 4890 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.750871 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:33:20.750862365 +0000 UTC m=+83.112304774 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.751089 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.751183 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.751256 4890 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.751392 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:33:20.751383078 +0000 UTC m=+83.112825487 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.764272 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.764314 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.764324 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.764338 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.764361 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.851875 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.852222 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:33:20.85217391 +0000 UTC m=+83.213616369 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.866959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.866993 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.867002 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.867016 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:48 crc kubenswrapper[4890]: 
I0121 15:32:48.867027 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.886568 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:32:01.599660755 +0000 UTC Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.913208 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.913299 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.913337 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.913605 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.913697 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:48 crc kubenswrapper[4890]: E0121 15:32:48.913836 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.969300 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.969390 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.969413 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.969438 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:48 crc kubenswrapper[4890]: I0121 15:32:48.969454 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:48Z","lastTransitionTime":"2026-01-21T15:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.072347 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.072426 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.072442 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.072465 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.072483 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.174415 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.174448 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.174457 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.174488 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.174499 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.276442 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.276481 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.276493 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.276508 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.276518 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.378627 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.378658 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.378665 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.378677 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.378686 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.482181 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.482231 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.482241 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.482259 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.482270 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.586286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.586575 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.586801 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.587021 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.587195 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.690123 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.690174 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.690195 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.690222 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.690242 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.793197 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.793268 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.793292 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.793320 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.793344 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.887714 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:00:36.488547008 +0000 UTC Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.896214 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.896268 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.896285 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.896309 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.896326 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.913868 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:49 crc kubenswrapper[4890]: E0121 15:32:49.914104 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.999260 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.999312 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.999328 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.999347 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:49 crc kubenswrapper[4890]: I0121 15:32:49.999387 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:49Z","lastTransitionTime":"2026-01-21T15:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.102861 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.102922 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.102939 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.102965 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.102982 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:50Z","lastTransitionTime":"2026-01-21T15:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.206792 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.206833 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.206844 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.206859 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.206873 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:50Z","lastTransitionTime":"2026-01-21T15:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.310199 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.310249 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.310266 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.310288 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.310302 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:50Z","lastTransitionTime":"2026-01-21T15:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.413034 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.413106 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.413119 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.413137 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.413176 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:50Z","lastTransitionTime":"2026-01-21T15:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.516039 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.516080 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.516092 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.516108 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.516121 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:50Z","lastTransitionTime":"2026-01-21T15:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.619167 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.619210 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.619218 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.619238 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.619250 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:50Z","lastTransitionTime":"2026-01-21T15:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.722067 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.722144 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.722169 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.722198 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.722222 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:50Z","lastTransitionTime":"2026-01-21T15:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.814544 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.825574 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.825600 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.825608 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.825620 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.825629 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:50Z","lastTransitionTime":"2026-01-21T15:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.828897 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.832566 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 
crc kubenswrapper[4890]: I0121 15:32:50.844944 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.859455 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.875316 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.888904 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 12:15:41.690734976 +0000 UTC Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.892066 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additi
onal-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4
ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\
\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.903173 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.913526 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.913526 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.913535 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:50 crc kubenswrapper[4890]: E0121 15:32:50.913687 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:50 crc kubenswrapper[4890]: E0121 15:32:50.913796 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:50 crc kubenswrapper[4890]: E0121 15:32:50.913931 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.919462 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06
bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.927599 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.927836 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 
15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.927849 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.927866 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.927876 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:50Z","lastTransitionTime":"2026-01-21T15:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.932876 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.947065 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.958786 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.972373 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.985591 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:50 crc kubenswrapper[4890]: I0121 15:32:50.999083 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:50Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.013115 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.027181 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.030597 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.030626 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.030634 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.030647 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.030656 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.050590 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.070872 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.133998 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.134055 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.134067 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.134092 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.134107 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.237097 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.237156 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.237172 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.237195 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.237207 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.245659 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.245718 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.245733 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.245755 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.245774 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: E0121 15:32:51.263939 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.269346 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.269432 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.269447 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.269469 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.269485 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: E0121 15:32:51.286321 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.291009 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.291066 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.291079 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.291098 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.291464 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: E0121 15:32:51.306102 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.310334 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.310403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.310415 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.310436 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.310449 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: E0121 15:32:51.325439 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.329551 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.329614 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.329626 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.329643 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.329657 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: E0121 15:32:51.343066 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:51 crc kubenswrapper[4890]: E0121 15:32:51.343170 4890 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.345096 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.345164 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.345189 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.345225 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.345246 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.447867 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.447944 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.447968 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.447998 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.448021 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.550613 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.550848 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.550865 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.550892 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.550907 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.654217 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.654272 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.654287 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.654306 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.654346 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.757154 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.757189 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.757201 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.757217 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.757228 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.859379 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.859425 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.859437 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.859455 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.859467 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.889956 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:35:28.453050236 +0000 UTC Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.913290 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:51 crc kubenswrapper[4890]: E0121 15:32:51.913451 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.962012 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.962067 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.962083 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.962106 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:51 crc kubenswrapper[4890]: I0121 15:32:51.962124 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:51Z","lastTransitionTime":"2026-01-21T15:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.065268 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.065325 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.065342 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.065406 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.065426 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.167709 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.167800 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.167823 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.167848 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.167867 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.273420 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.273481 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.273498 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.273515 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.273552 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.377273 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.377340 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.377848 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.378123 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.378135 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.480166 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.480235 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.480245 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.480261 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.480294 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.583197 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.583261 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.583278 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.583302 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.583319 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.685714 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.685749 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.685760 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.685777 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.685788 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.787911 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.787971 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.787982 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.787997 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.788007 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.890104 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.890081 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:58:35.352066597 +0000 UTC Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.890161 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.890229 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.890260 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.890281 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.913661 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.913675 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:52 crc kubenswrapper[4890]: E0121 15:32:52.913842 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:52 crc kubenswrapper[4890]: E0121 15:32:52.913976 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.913681 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:52 crc kubenswrapper[4890]: E0121 15:32:52.914089 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.992663 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.992725 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.992756 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.992772 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:52 crc kubenswrapper[4890]: I0121 15:32:52.992783 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:52Z","lastTransitionTime":"2026-01-21T15:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.095747 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.095795 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.095806 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.095827 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.095840 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:53Z","lastTransitionTime":"2026-01-21T15:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.199020 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.199097 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.199132 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.199163 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.199193 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:53Z","lastTransitionTime":"2026-01-21T15:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.300953 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.301002 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.301019 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.301037 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.301053 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:53Z","lastTransitionTime":"2026-01-21T15:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.403961 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.404025 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.404037 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.404053 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.404069 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:53Z","lastTransitionTime":"2026-01-21T15:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.507158 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.507230 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.507247 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.507273 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.507291 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:53Z","lastTransitionTime":"2026-01-21T15:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.609129 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.609158 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.609166 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.609178 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.609187 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:53Z","lastTransitionTime":"2026-01-21T15:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.711644 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.711688 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.711696 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.711709 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.711719 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:53Z","lastTransitionTime":"2026-01-21T15:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.813842 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.813892 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.813903 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.813920 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.813933 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:53Z","lastTransitionTime":"2026-01-21T15:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.891046 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:02:02.786722785 +0000 UTC Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.913752 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:53 crc kubenswrapper[4890]: E0121 15:32:53.913935 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.915839 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.915902 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.915924 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.915952 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:53 crc kubenswrapper[4890]: I0121 15:32:53.915974 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:53Z","lastTransitionTime":"2026-01-21T15:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.019382 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.019444 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.019467 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.019495 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.019529 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.122637 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.122704 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.122724 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.122753 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.122778 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.225336 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.225437 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.225472 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.225504 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.225529 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.327845 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.327932 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.327967 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.327996 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.328016 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.431499 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.431556 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.431572 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.431597 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.431621 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.534043 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.534096 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.534118 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.534145 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.534169 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.637894 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.637959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.637977 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.638003 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.638021 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.740867 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.740944 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.740964 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.740996 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.741022 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.845546 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.845622 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.845665 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.845700 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.845725 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.892108 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 22:17:57.944995263 +0000 UTC Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.913476 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.913500 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.913506 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:54 crc kubenswrapper[4890]: E0121 15:32:54.913653 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:54 crc kubenswrapper[4890]: E0121 15:32:54.913750 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:54 crc kubenswrapper[4890]: E0121 15:32:54.913843 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.949093 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.949160 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.949178 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.949201 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:54 crc kubenswrapper[4890]: I0121 15:32:54.949220 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:54Z","lastTransitionTime":"2026-01-21T15:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.052661 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.052713 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.052739 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.052764 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.052783 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:55Z","lastTransitionTime":"2026-01-21T15:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.156025 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.156062 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.156071 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.156083 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.156096 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:55Z","lastTransitionTime":"2026-01-21T15:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.583182 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.583230 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.583243 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.583261 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.583272 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:55Z","lastTransitionTime":"2026-01-21T15:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.686913 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.686967 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.686979 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.686996 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.687008 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:55Z","lastTransitionTime":"2026-01-21T15:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.791091 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.791198 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.791216 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.791621 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.791635 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:55Z","lastTransitionTime":"2026-01-21T15:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.892316 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 19:15:10.097445478 +0000 UTC Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.894109 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.894170 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.894187 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.894211 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.894226 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:55Z","lastTransitionTime":"2026-01-21T15:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.913447 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:55 crc kubenswrapper[4890]: E0121 15:32:55.913593 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.997410 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.997464 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.997481 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.997501 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:55 crc kubenswrapper[4890]: I0121 15:32:55.997516 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:55Z","lastTransitionTime":"2026-01-21T15:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.100409 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.100458 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.100468 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.100486 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.100495 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:56Z","lastTransitionTime":"2026-01-21T15:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.203318 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.203363 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.203372 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.203387 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.203396 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:56Z","lastTransitionTime":"2026-01-21T15:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.306002 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.306075 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.306092 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.306119 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.306137 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:56Z","lastTransitionTime":"2026-01-21T15:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.409590 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.409643 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.409656 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.409674 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.409685 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:56Z","lastTransitionTime":"2026-01-21T15:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.512330 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.512814 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.512985 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.513189 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.513453 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:56Z","lastTransitionTime":"2026-01-21T15:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.616179 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.616710 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.616957 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.617171 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.617437 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:56Z","lastTransitionTime":"2026-01-21T15:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.720533 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.720932 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.721129 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.721336 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.721572 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:56Z","lastTransitionTime":"2026-01-21T15:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.825593 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.825689 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.825746 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.825778 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.825801 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:56Z","lastTransitionTime":"2026-01-21T15:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.892891 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:03:59.602692828 +0000 UTC Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.913256 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.913278 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:56 crc kubenswrapper[4890]: E0121 15:32:56.913501 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.913520 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:56 crc kubenswrapper[4890]: E0121 15:32:56.913698 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:56 crc kubenswrapper[4890]: E0121 15:32:56.914061 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.928189 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.928252 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.928274 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.928301 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:56 crc kubenswrapper[4890]: I0121 15:32:56.928322 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:56Z","lastTransitionTime":"2026-01-21T15:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.030554 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.030648 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.030673 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.030705 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.030728 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.133714 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.134124 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.134275 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.134468 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.134624 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.237182 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.237247 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.237264 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.237287 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.237304 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.340186 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.340261 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.340278 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.340300 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.340317 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.443262 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.443402 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.443420 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.443443 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.443459 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.546559 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.546621 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.546638 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.546661 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.546678 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.649437 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.649486 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.649502 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.649524 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.649540 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.752796 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.752862 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.752887 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.752912 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.752929 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.857742 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.857784 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.857803 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.857822 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.857834 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.893261 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:25:23.200245544 +0000 UTC Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.913122 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:57 crc kubenswrapper[4890]: E0121 15:32:57.913254 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.930910 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.949731 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.960451 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.960507 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.960530 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.960555 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.960568 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:57Z","lastTransitionTime":"2026-01-21T15:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.964117 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.977390 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:57 crc kubenswrapper[4890]: I0121 15:32:57.992411 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.006472 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.027268 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.043879 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.062404 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.062778 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.062822 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.062838 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.062860 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.062876 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.083105 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.097697 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de
2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.121166 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.141956 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.153990 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.166087 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.166396 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.166587 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.166870 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc 
kubenswrapper[4890]: I0121 15:32:58.167070 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.172095 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.188939 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.204086 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.224002 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:32:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.270019 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.270081 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.270113 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.270156 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.270179 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.373761 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.373819 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.373838 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.373863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.373883 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.476471 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.476803 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.477086 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.477296 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.477525 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.580600 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.580670 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.580694 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.580724 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.580747 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.683071 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.683114 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.683128 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.683147 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.683160 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.786176 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.786521 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.786698 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.786897 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.787243 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.890138 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.890679 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.890824 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.890973 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.891109 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.894366 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:50:56.547799317 +0000 UTC Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.914194 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.914216 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:32:58 crc kubenswrapper[4890]: E0121 15:32:58.914304 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.914522 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:32:58 crc kubenswrapper[4890]: E0121 15:32:58.914544 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:32:58 crc kubenswrapper[4890]: E0121 15:32:58.914685 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.994072 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.994145 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.994171 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.994200 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:58 crc kubenswrapper[4890]: I0121 15:32:58.994222 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:58Z","lastTransitionTime":"2026-01-21T15:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.096794 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.097139 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.097151 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.097168 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.097181 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:59Z","lastTransitionTime":"2026-01-21T15:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.199332 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.199405 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.199419 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.199435 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.199446 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:59Z","lastTransitionTime":"2026-01-21T15:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.302325 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.302387 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.302398 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.302412 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.302422 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:59Z","lastTransitionTime":"2026-01-21T15:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.404881 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.404915 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.404926 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.404943 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.404955 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:59Z","lastTransitionTime":"2026-01-21T15:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.507957 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.508025 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.508048 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.508075 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.508096 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:59Z","lastTransitionTime":"2026-01-21T15:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.610532 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.610582 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.610596 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.610613 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.610625 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:59Z","lastTransitionTime":"2026-01-21T15:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.713436 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.713499 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.713518 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.713540 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.713555 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:59Z","lastTransitionTime":"2026-01-21T15:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.816304 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.816402 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.816424 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.816456 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.816477 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:59Z","lastTransitionTime":"2026-01-21T15:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.895218 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:40:15.294044204 +0000 UTC Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.914040 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:32:59 crc kubenswrapper[4890]: E0121 15:32:59.914211 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.919644 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.919772 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.919805 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.919837 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:32:59 crc kubenswrapper[4890]: I0121 15:32:59.919860 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:32:59Z","lastTransitionTime":"2026-01-21T15:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.022568 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.022661 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.022702 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.022736 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.022760 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.125828 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.125904 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.125927 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.125955 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.125978 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.228699 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.228731 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.228742 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.228757 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.228768 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.331072 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.331123 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.331144 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.331172 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.331195 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.433433 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.433463 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.433471 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.433483 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.433491 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.535644 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.535716 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.535735 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.535758 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.535774 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.638532 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.638578 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.638594 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.638616 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.638630 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.740469 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.740526 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.740543 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.740569 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.740586 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.843966 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.844007 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.844021 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.844037 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.844052 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.895587 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:00:15.61818267 +0000 UTC Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.913918 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.913988 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:00 crc kubenswrapper[4890]: E0121 15:33:00.914043 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.914066 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:00 crc kubenswrapper[4890]: E0121 15:33:00.914214 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:00 crc kubenswrapper[4890]: E0121 15:33:00.914252 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.946403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.946438 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.946449 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.946464 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:00 crc kubenswrapper[4890]: I0121 15:33:00.946475 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:00Z","lastTransitionTime":"2026-01-21T15:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.049049 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.049323 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.049456 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.049572 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.049669 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.152649 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.152686 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.152694 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.152709 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.152718 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.255300 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.255343 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.255381 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.255406 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.255418 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.357616 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.357675 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.357694 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.357717 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.357734 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.460631 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.460693 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.460710 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.460737 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.460754 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.569645 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.569680 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.569692 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.569707 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.569719 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.671609 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.671651 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.671660 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.671673 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.671681 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.714934 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.714977 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.714989 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.715003 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.715016 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: E0121 15:33:01.728679 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.732429 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.732461 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.732471 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.732485 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.732502 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: E0121 15:33:01.743052 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.745746 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.745839 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.745918 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.745990 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.746060 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.759714 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.759810 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.759891 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.759970 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.760059 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.782309 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.782378 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.782390 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.782409 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.782421 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: E0121 15:33:01.794962 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:01 crc kubenswrapper[4890]: E0121 15:33:01.795132 4890 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.796396 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.796430 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.796448 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.796468 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.796481 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.895975 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:06:10.051050177 +0000 UTC Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.898927 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.898967 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.898979 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.898995 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.899007 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:01Z","lastTransitionTime":"2026-01-21T15:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.913180 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:01 crc kubenswrapper[4890]: E0121 15:33:01.913328 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:01 crc kubenswrapper[4890]: I0121 15:33:01.913942 4890 scope.go:117] "RemoveContainer" containerID="962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0" Jan 21 15:33:01 crc kubenswrapper[4890]: E0121 15:33:01.914160 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.001614 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.001653 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.001665 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.001680 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.001691 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.104470 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.104511 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.104522 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.104539 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.104553 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.207407 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.207738 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.207911 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.208058 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.208191 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.311461 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.311818 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.311896 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.311975 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.312045 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.415165 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.415209 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.415220 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.415237 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.415251 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.517532 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.517572 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.517582 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.517596 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.517606 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.620076 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.620117 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.620128 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.620143 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.620166 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.648593 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:02 crc kubenswrapper[4890]: E0121 15:33:02.648792 4890 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:33:02 crc kubenswrapper[4890]: E0121 15:33:02.648920 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs podName:a86abbe4-e7c5-4a3e-a8d7-02d82267ded6 nodeName:}" failed. No retries permitted until 2026-01-21 15:33:34.648887208 +0000 UTC m=+97.010329827 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs") pod "network-metrics-daemon-j9mfr" (UID: "a86abbe4-e7c5-4a3e-a8d7-02d82267ded6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.722861 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.722927 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.722939 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.722957 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.722970 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.826085 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.826131 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.826141 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.826158 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.826189 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.896870 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:03:14.112908005 +0000 UTC Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.913221 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.913381 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.913388 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:02 crc kubenswrapper[4890]: E0121 15:33:02.913536 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:02 crc kubenswrapper[4890]: E0121 15:33:02.913656 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:02 crc kubenswrapper[4890]: E0121 15:33:02.913792 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.928730 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.928787 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.928804 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.928830 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:02 crc kubenswrapper[4890]: I0121 15:33:02.928851 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:02Z","lastTransitionTime":"2026-01-21T15:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.031724 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.031770 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.031778 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.031793 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.031803 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.134591 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.134628 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.134637 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.134650 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.134659 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.237481 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.237532 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.237544 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.237560 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.237571 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.340740 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.340814 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.340833 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.340860 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.340883 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.443223 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.443278 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.443297 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.443320 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.443336 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.546082 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.546135 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.546146 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.546164 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.546175 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.610630 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/0.log" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.610688 4890 generic.go:334] "Generic (PLEG): container finished" podID="eba30f20-e5ad-4888-850d-1715115ab8bd" containerID="e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7" exitCode=1 Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.610717 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pflt5" event={"ID":"eba30f20-e5ad-4888-850d-1715115ab8bd","Type":"ContainerDied","Data":"e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.611296 4890 scope.go:117] "RemoveContainer" containerID="e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.636697 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.648316 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.648398 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.648417 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.648440 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.648458 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.659766 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.672139 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.684692 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128e
b77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.701427 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.714933 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.729531 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.737525 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.747917 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"2026-01-21T15:32:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9\\\\n2026-01-21T15:32:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9 to /host/opt/cni/bin/\\\\n2026-01-21T15:32:18Z [verbose] multus-daemon started\\\\n2026-01-21T15:32:18Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:33:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.750770 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.750805 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.750813 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.750827 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.750836 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.758248 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.770496 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.789581 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.804970 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.823476 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.838824 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.852782 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.852984 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.853074 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc 
kubenswrapper[4890]: I0121 15:33:03.853249 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.853426 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.854500 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 
15:33:03.870661 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.885809 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:03Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.897845 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:09:18.019516792 +0000 UTC Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.913131 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:03 crc kubenswrapper[4890]: E0121 15:33:03.913341 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.955919 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.956130 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.956225 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.956305 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:03 crc kubenswrapper[4890]: I0121 15:33:03.956391 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:03Z","lastTransitionTime":"2026-01-21T15:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.058337 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.058431 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.058455 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.058486 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.058510 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.161248 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.161273 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.161281 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.161293 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.161301 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.264219 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.264263 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.264272 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.264286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.264296 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.367227 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.367266 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.367278 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.367297 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.367310 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.469808 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.469848 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.469863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.469879 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.469893 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.572044 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.572110 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.572130 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.572157 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.572173 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.616371 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/0.log" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.616744 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pflt5" event={"ID":"eba30f20-e5ad-4888-850d-1715115ab8bd","Type":"ContainerStarted","Data":"68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.642193 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.655456 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.674767 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.674795 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.674804 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.674820 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.674830 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.675706 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.690212 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.702550 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.713859 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.733192 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.746495 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.761750 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\
\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.774621 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.777417 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.777462 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.777473 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.777493 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.777506 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.791182 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"2026-01-21T15:32:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9\\\\n2026-01-21T15:32:18+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9 to /host/opt/cni/bin/\\\\n2026-01-21T15:32:18Z [verbose] multus-daemon started\\\\n2026-01-21T15:32:18Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:33:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.803636 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.817634 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc 
kubenswrapper[4890]: I0121 15:33:04.830674 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.846004 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.860215 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.873956 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.880040 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.880101 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.880119 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.880144 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.880159 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.886012 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:04Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.898288 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:17:29.443096014 +0000 UTC Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.913755 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.913865 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.913907 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:04 crc kubenswrapper[4890]: E0121 15:33:04.914054 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:04 crc kubenswrapper[4890]: E0121 15:33:04.914152 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:04 crc kubenswrapper[4890]: E0121 15:33:04.914275 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.983276 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.983335 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.983364 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.983380 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:04 crc kubenswrapper[4890]: I0121 15:33:04.983389 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:04Z","lastTransitionTime":"2026-01-21T15:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.085377 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.085436 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.085454 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.085478 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.085496 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:05Z","lastTransitionTime":"2026-01-21T15:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.188125 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.188174 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.188185 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.188202 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.188213 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:05Z","lastTransitionTime":"2026-01-21T15:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.291996 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.292060 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.292078 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.292104 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.292125 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:05Z","lastTransitionTime":"2026-01-21T15:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.395176 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.395220 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.395233 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.395251 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.395262 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:05Z","lastTransitionTime":"2026-01-21T15:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.498047 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.498084 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.498119 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.498133 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.498142 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:05Z","lastTransitionTime":"2026-01-21T15:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.600896 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.600930 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.600942 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.600959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.600971 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:05Z","lastTransitionTime":"2026-01-21T15:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.703235 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.703272 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.703284 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.703306 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.703319 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:05Z","lastTransitionTime":"2026-01-21T15:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.805626 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.805667 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.805677 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.805690 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.805699 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:05Z","lastTransitionTime":"2026-01-21T15:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.898865 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:23:37.339196032 +0000 UTC Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.907795 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.907836 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.907847 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.907863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.907874 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:05Z","lastTransitionTime":"2026-01-21T15:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:05 crc kubenswrapper[4890]: I0121 15:33:05.914188 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:05 crc kubenswrapper[4890]: E0121 15:33:05.914341 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.010645 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.010716 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.010736 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.010765 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.010783 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.112614 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.112644 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.112652 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.112664 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.112672 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.215863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.215926 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.215944 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.215968 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.215986 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.319547 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.319597 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.319613 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.319636 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.319652 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.422284 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.422336 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.422368 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.422387 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.422400 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.529050 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.529128 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.529150 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.529179 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.529200 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.631420 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.631451 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.631500 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.631518 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.631528 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.733445 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.733520 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.733542 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.733567 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.733585 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.836127 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.836171 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.836183 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.836200 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.836213 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.899316 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:28:03.742148742 +0000 UTC Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.913573 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.913684 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:06 crc kubenswrapper[4890]: E0121 15:33:06.913784 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.913806 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:06 crc kubenswrapper[4890]: E0121 15:33:06.914055 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:06 crc kubenswrapper[4890]: E0121 15:33:06.914173 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.926545 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.939438 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.939509 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.939544 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.939564 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:06 crc kubenswrapper[4890]: I0121 15:33:06.939578 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:06Z","lastTransitionTime":"2026-01-21T15:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.042080 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.042129 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.042141 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.042161 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.042177 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.145653 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.145713 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.145729 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.145755 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.145772 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.248271 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.248311 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.248321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.248334 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.248343 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.350607 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.350694 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.350707 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.350727 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.350739 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.453343 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.453403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.453411 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.453426 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.453438 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.555457 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.555487 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.555497 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.555512 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.555522 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.657254 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.657291 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.657299 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.657315 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.657328 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.759442 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.759488 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.759501 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.759517 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.759526 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.861943 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.861995 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.862007 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.862023 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.862035 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.900550 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:48:15.582591949 +0000 UTC Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.913191 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:07 crc kubenswrapper[4890]: E0121 15:33:07.913360 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.929568 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:07Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.946655 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:07Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.964630 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.964687 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.964700 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.964722 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.964733 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:07Z","lastTransitionTime":"2026-01-21T15:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.966009 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"2026-01-21T15:32:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9\\\\n2026-01-21T15:32:18+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9 to /host/opt/cni/bin/\\\\n2026-01-21T15:32:18Z [verbose] multus-daemon started\\\\n2026-01-21T15:32:18Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:33:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:07Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.978058 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:07Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:07 crc kubenswrapper[4890]: I0121 15:33:07.990568 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:07Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc 
kubenswrapper[4890]: I0121 15:33:08.002241 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1462f01-5bca-4532-a218-b1e897c2bde3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dec3c6ab3524fe62b68cbd9a0d85055c81972dc18663c7b3ee01d9899335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.017682 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.032575 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.047274 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.063713 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.067696 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.067737 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.067749 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.067766 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.067777 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.078181 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.102241 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.120314 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.144664 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.156339 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.168439 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.170257 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.170288 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.170302 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.170321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.170334 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.181661 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.198386 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.212007 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:33:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.272991 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.273045 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.273059 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.273078 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.273095 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.375333 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.375386 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.375397 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.375413 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.375423 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.477776 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.477830 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.477847 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.477867 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.477883 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.581033 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.581077 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.581085 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.581098 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.581107 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.683219 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.683250 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.683257 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.683271 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.683279 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.785959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.786007 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.786017 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.786032 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.786040 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.887779 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.888123 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.888135 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.888149 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.888159 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.901064 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:50:05.894310108 +0000 UTC Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.913411 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.913450 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.913411 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:08 crc kubenswrapper[4890]: E0121 15:33:08.913569 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:08 crc kubenswrapper[4890]: E0121 15:33:08.913714 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:08 crc kubenswrapper[4890]: E0121 15:33:08.913785 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.990419 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.990459 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.990467 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.990484 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:08 crc kubenswrapper[4890]: I0121 15:33:08.990494 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:08Z","lastTransitionTime":"2026-01-21T15:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.092451 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.092486 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.092499 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.092517 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.092532 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:09Z","lastTransitionTime":"2026-01-21T15:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.194462 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.194492 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.194501 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.194513 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.194523 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:09Z","lastTransitionTime":"2026-01-21T15:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.296375 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.296451 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.296468 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.296494 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.296512 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:09Z","lastTransitionTime":"2026-01-21T15:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.398638 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.398679 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.398690 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.398709 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.398721 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:09Z","lastTransitionTime":"2026-01-21T15:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.501423 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.501477 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.501489 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.501510 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.501523 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:09Z","lastTransitionTime":"2026-01-21T15:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.604251 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.604293 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.604303 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.604319 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.604330 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:09Z","lastTransitionTime":"2026-01-21T15:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.706799 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.706849 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.706863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.706881 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.706893 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:09Z","lastTransitionTime":"2026-01-21T15:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.809898 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.809947 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.809956 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.809973 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.809983 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:09Z","lastTransitionTime":"2026-01-21T15:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.902155 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:24:36.428391346 +0000 UTC Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.912665 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.912728 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.912740 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.912760 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.912777 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:09Z","lastTransitionTime":"2026-01-21T15:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:09 crc kubenswrapper[4890]: I0121 15:33:09.913200 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:09 crc kubenswrapper[4890]: E0121 15:33:09.913395 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.015492 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.015529 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.015539 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.015552 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.015568 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.117447 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.117495 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.117507 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.117522 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.117542 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.219885 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.219931 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.219942 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.219959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.219969 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.322162 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.322194 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.322203 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.322215 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.322223 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.424687 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.424736 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.424750 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.424767 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.424778 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.527466 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.527534 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.527545 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.527566 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.527578 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.629617 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.629654 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.629664 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.629679 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.629688 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.731999 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.732036 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.732045 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.732057 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.732066 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.834529 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.834571 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.834584 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.834599 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.834611 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.902399 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:34:39.223319811 +0000 UTC Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.913703 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.913780 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.913799 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:10 crc kubenswrapper[4890]: E0121 15:33:10.913822 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:10 crc kubenswrapper[4890]: E0121 15:33:10.913894 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:10 crc kubenswrapper[4890]: E0121 15:33:10.913976 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.936595 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.936632 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.936641 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.936655 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:10 crc kubenswrapper[4890]: I0121 15:33:10.936664 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:10Z","lastTransitionTime":"2026-01-21T15:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.042711 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.042785 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.042809 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.042858 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.042889 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.145999 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.146049 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.146061 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.146076 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.146086 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.248743 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.248795 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.248806 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.248825 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.248834 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.351190 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.351232 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.351244 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.351259 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.351271 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.454013 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.454049 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.454061 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.454076 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.454088 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.556531 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.556565 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.556576 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.556591 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.556601 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.659202 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.659242 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.659253 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.659268 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.659278 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.761745 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.761796 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.761814 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.761836 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.761852 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.864457 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.864514 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.864535 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.864570 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.864593 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.903235 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:58:03.820429742 +0000 UTC Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.914089 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:11 crc kubenswrapper[4890]: E0121 15:33:11.914320 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.960680 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.960742 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.960773 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.960810 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.960829 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:11 crc kubenswrapper[4890]: E0121 15:33:11.980755 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.986605 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.986650 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.986667 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.986689 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:11 crc kubenswrapper[4890]: I0121 15:33:11.986705 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:11Z","lastTransitionTime":"2026-01-21T15:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: E0121 15:33:12.005823 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.011247 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.011304 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.011324 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.011375 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.011393 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: E0121 15:33:12.028786 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.033695 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.033771 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.033789 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.033829 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.033841 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: E0121 15:33:12.046371 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.050168 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.050201 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.050211 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.050228 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.050241 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: E0121 15:33:12.068136 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:12 crc kubenswrapper[4890]: E0121 15:33:12.068255 4890 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.070346 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.070407 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.070419 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.070433 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.070444 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.173378 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.173436 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.173447 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.173463 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.173474 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.277269 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.277329 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.277346 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.277394 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.277412 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.379991 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.380033 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.380042 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.380054 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.380064 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.482548 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.482596 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.482608 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.482685 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.482699 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.585554 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.585606 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.585615 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.585629 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.585639 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.688444 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.688508 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.688525 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.688545 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.688558 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.790710 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.790762 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.790773 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.790794 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.790806 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.894234 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.894277 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.894286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.894301 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.894312 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.903436 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:57:26.521882037 +0000 UTC Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.913886 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.913922 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:12 crc kubenswrapper[4890]: E0121 15:33:12.914080 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:12 crc kubenswrapper[4890]: E0121 15:33:12.914236 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.914500 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:12 crc kubenswrapper[4890]: E0121 15:33:12.914710 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.997286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.997376 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.997398 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.997423 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:12 crc kubenswrapper[4890]: I0121 15:33:12.997442 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:12Z","lastTransitionTime":"2026-01-21T15:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.100826 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.100861 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.100870 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.100885 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.100900 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:13Z","lastTransitionTime":"2026-01-21T15:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.203317 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.203394 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.203404 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.203422 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.203431 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:13Z","lastTransitionTime":"2026-01-21T15:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.306055 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.306115 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.306133 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.306154 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.306171 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:13Z","lastTransitionTime":"2026-01-21T15:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.409421 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.409458 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.409467 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.409481 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.409491 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:13Z","lastTransitionTime":"2026-01-21T15:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.513073 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.513132 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.513149 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.513170 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.513184 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:13Z","lastTransitionTime":"2026-01-21T15:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.618810 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.618914 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.618945 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.618981 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.619016 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:13Z","lastTransitionTime":"2026-01-21T15:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.722155 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.722194 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.722203 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.722221 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.722231 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:13Z","lastTransitionTime":"2026-01-21T15:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.825452 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.825512 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.825527 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.825546 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.825558 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:13Z","lastTransitionTime":"2026-01-21T15:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.903869 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:53:54.828178834 +0000 UTC Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.913248 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:13 crc kubenswrapper[4890]: E0121 15:33:13.913451 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.928947 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.929011 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.929021 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.929042 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:13 crc kubenswrapper[4890]: I0121 15:33:13.929055 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:13Z","lastTransitionTime":"2026-01-21T15:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.031411 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.031462 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.031477 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.031502 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.031521 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.134696 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.134751 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.134763 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.134788 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.134803 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.237159 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.237206 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.237217 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.237234 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.237245 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.339708 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.339764 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.339782 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.339806 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.339827 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.443034 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.443110 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.443125 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.443152 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.443184 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.545668 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.545721 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.545735 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.545763 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.545775 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.648429 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.648470 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.648481 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.648499 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.648511 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.751045 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.751072 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.751079 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.751092 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.751101 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.852863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.852903 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.852921 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.852937 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.852947 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.904739 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:27:49.023848954 +0000 UTC Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.913293 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:14 crc kubenswrapper[4890]: E0121 15:33:14.913537 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.913740 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.913818 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:14 crc kubenswrapper[4890]: E0121 15:33:14.913982 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:14 crc kubenswrapper[4890]: E0121 15:33:14.914116 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.956197 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.956263 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.956282 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.956311 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:14 crc kubenswrapper[4890]: I0121 15:33:14.956330 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:14Z","lastTransitionTime":"2026-01-21T15:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.059265 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.059324 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.059337 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.059378 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.059408 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.162430 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.162505 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.162525 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.162559 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.162581 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.265899 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.265959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.265982 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.266012 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.266036 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.369589 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.369688 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.369715 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.369752 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.369770 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.472798 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.472835 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.472846 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.472863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.472876 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.575879 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.575921 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.575930 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.575944 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.575954 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.682595 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.682648 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.682711 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.682735 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.682762 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.786730 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.786789 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.786811 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.786839 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.786860 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.890064 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.890130 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.890141 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.890160 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.890174 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.905689 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 06:04:39.201566752 +0000 UTC Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.914108 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:15 crc kubenswrapper[4890]: E0121 15:33:15.914453 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.992426 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.992501 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.992527 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.992560 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:15 crc kubenswrapper[4890]: I0121 15:33:15.992584 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:15Z","lastTransitionTime":"2026-01-21T15:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.095100 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.095136 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.095146 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.095163 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.095174 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:16Z","lastTransitionTime":"2026-01-21T15:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.198272 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.198311 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.198360 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.198378 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.198391 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:16Z","lastTransitionTime":"2026-01-21T15:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.301445 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.301512 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.301525 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.301540 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.301553 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:16Z","lastTransitionTime":"2026-01-21T15:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.404219 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.404261 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.404274 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.404292 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.404304 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:16Z","lastTransitionTime":"2026-01-21T15:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.507827 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.507905 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.507927 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.507954 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.507974 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:16Z","lastTransitionTime":"2026-01-21T15:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.611451 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.611512 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.611524 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.611543 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.611555 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:16Z","lastTransitionTime":"2026-01-21T15:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.714817 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.714882 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.714902 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.714928 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.714946 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:16Z","lastTransitionTime":"2026-01-21T15:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.817267 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.817304 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.817316 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.817333 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.817345 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:16Z","lastTransitionTime":"2026-01-21T15:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.906753 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:52:50.209150766 +0000 UTC Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.914075 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.914141 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.914146 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:16 crc kubenswrapper[4890]: E0121 15:33:16.914245 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:16 crc kubenswrapper[4890]: E0121 15:33:16.914438 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:16 crc kubenswrapper[4890]: E0121 15:33:16.914541 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.915581 4890 scope.go:117] "RemoveContainer" containerID="962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.919996 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.920044 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.920066 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.920090 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:16 crc kubenswrapper[4890]: I0121 15:33:16.920113 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:16Z","lastTransitionTime":"2026-01-21T15:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.022394 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.022452 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.022471 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.022496 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.022513 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.125569 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.125640 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.125656 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.125681 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.125697 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.228027 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.228073 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.228083 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.228099 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.228110 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.330878 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.330959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.330980 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.331006 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.331025 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.433214 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.433249 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.433260 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.433275 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.433287 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.535192 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.535270 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.535286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.535311 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.535334 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.638396 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.638433 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.638444 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.638461 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.638473 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.658572 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/2.log" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.671616 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.672127 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.691096 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661
228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.709781 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b
26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee
65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:
32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.722949 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176f
b29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.733656 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.743828 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.744769 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.744803 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.744811 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.744828 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.744840 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.757153 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"2026-01-21T15:32:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9\\\\n2026-01-21T15:32:18+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9 to /host/opt/cni/bin/\\\\n2026-01-21T15:32:18Z [verbose] multus-daemon started\\\\n2026-01-21T15:32:18Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:33:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.765898 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.777515 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc 
kubenswrapper[4890]: I0121 15:33:17.790065 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a
7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.803619 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.819403 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.834339 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.846917 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.846967 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.846978 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc 
kubenswrapper[4890]: I0121 15:33:17.846993 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.847003 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.848548 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 
15:33:17.860127 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.869548 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1462f01-5bca-4532-a218-b1e897c2bde3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dec3c6ab3524fe62b68cbd9a0d85055c81972dc18663c7b3ee01d9899335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.882301 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.901785 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.907368 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:32:53.450087374 +0000 UTC Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.913828 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:17 crc kubenswrapper[4890]: E0121 15:33:17.913999 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.921815 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.933321 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.946218 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.948835 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.948869 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.948877 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 
15:33:17.948890 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.948899 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:17Z","lastTransitionTime":"2026-01-21T15:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.959984 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.970336 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.980093 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.988823 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1462f01-5bca-4532-a218-b1e897c2bde3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dec3c6ab3524fe62b68cbd9a0d85055c81972dc18663c7b3ee01d9899335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:17 crc kubenswrapper[4890]: I0121 15:33:17.998809 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.020129 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.044275 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.051549 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.051619 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.051633 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.051655 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.051668 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.057305 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0
b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.072007 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128e
b77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.088458 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.100802 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.112787 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.125537 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.140893 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"2026-01-21T15:32:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9\\\\n2026-01-21T15:32:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9 to /host/opt/cni/bin/\\\\n2026-01-21T15:32:18Z [verbose] multus-daemon started\\\\n2026-01-21T15:32:18Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:33:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.152137 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.154565 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.154636 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.154648 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.154662 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.154671 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.165163 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc 
kubenswrapper[4890]: I0121 15:33:18.181308 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a
7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.198485 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.258510 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.258573 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.258582 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.258598 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.258608 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.360400 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.360454 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.360472 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.360494 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.360510 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:18 crc kubenswrapper[4890]: E0121 15:33:18.394416 4890 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d5dcae_8e63_4910_9a28_4f6a5b2d427f.slice/crio-conmon-a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.463346 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.463453 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.463470 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.463487 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.463500 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.566530 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.566597 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.566609 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.566637 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.566653 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.669929 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.670003 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.670021 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.670045 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.670063 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.677536 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/3.log" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.678323 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/2.log" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.681780 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8" exitCode=1 Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.681833 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8"} Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.681878 4890 scope.go:117] "RemoveContainer" containerID="962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.682602 4890 scope.go:117] "RemoveContainer" containerID="a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8" Jan 21 15:33:18 crc kubenswrapper[4890]: E0121 15:33:18.682779 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.706297 4890 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.717207 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.732496 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.753785 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.770591 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.773159 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.773189 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.773200 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.773221 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.773231 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.791948 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.804437 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.818381 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"2026-01-21T15:32:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9\\\\n2026-01-21T15:32:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9 to /host/opt/cni/bin/\\\\n2026-01-21T15:32:18Z [verbose] multus-daemon started\\\\n2026-01-21T15:32:18Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:33:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.829176 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.844304 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc 
kubenswrapper[4890]: I0121 15:33:18.855396 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1462f01-5bca-4532-a218-b1e897c2bde3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dec3c6ab3524fe62b68cbd9a0d85055c81972dc18663c7b3ee01d9899335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.868579 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.879258 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.879320 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.879332 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.879381 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.879393 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.885939 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.898424 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.907683 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:01:14.564720831 +0000 UTC Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.913059 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.913075 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3
336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.913170 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.913237 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:18 crc kubenswrapper[4890]: E0121 15:33:18.913185 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:18 crc kubenswrapper[4890]: E0121 15:33:18.913385 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:18 crc kubenswrapper[4890]: E0121 15:33:18.913474 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.926607 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.944567 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686
594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.957855 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.976580 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://962649f2a0b00883e2aa8e47626be5fbce6d045e0a669ecd59ad7d1e68fbd7a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:32:44Z\\\",\\\"message\\\":\\\"\\\\nI0121 15:32:44.059409 6525 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0121 15:32:44.061179 6525 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:32:44.061242 6525 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 15:32:44.061253 6525 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 15:32:44.061278 6525 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 15:32:44.061291 6525 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:32:44.061302 6525 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 15:32:44.061534 6525 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:32:44.061546 6525 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:32:44.062385 6525 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:32:44.062406 6525 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:32:44.062432 6525 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:32:44.062436 6525 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:32:44.062453 6525 factory.go:656] Stopping watch factory\\\\nI0121 15:32:44.062468 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:32:44.062498 6525 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:18Z\\\",\\\"message\\\":\\\":18.046082 6946 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rp8lm\\\\nI0121 15:33:18.046086 6946 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qnlzh in node crc\\\\nF0121 15:33:18.046089 6946 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:33:18.046101 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-j9mfr\\\\nI0121 15:33:18.046102 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:33:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.981508 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.981554 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.981564 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.981580 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:18 crc kubenswrapper[4890]: I0121 15:33:18.981593 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:18Z","lastTransitionTime":"2026-01-21T15:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.083999 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.084072 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.084083 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.084098 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.084109 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:19Z","lastTransitionTime":"2026-01-21T15:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.186908 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.186946 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.186957 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.186973 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.186983 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:19Z","lastTransitionTime":"2026-01-21T15:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.289400 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.289451 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.289468 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.289492 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.289510 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:19Z","lastTransitionTime":"2026-01-21T15:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.392412 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.392461 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.392473 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.392490 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.392501 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:19Z","lastTransitionTime":"2026-01-21T15:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.494579 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.494623 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.494634 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.494650 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.494663 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:19Z","lastTransitionTime":"2026-01-21T15:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.597527 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.597569 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.597579 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.597593 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.597602 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:19Z","lastTransitionTime":"2026-01-21T15:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.686237 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/3.log" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.690730 4890 scope.go:117] "RemoveContainer" containerID="a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8" Jan 21 15:33:19 crc kubenswrapper[4890]: E0121 15:33:19.690965 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.699798 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.699872 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.699892 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.699917 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.699935 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:19Z","lastTransitionTime":"2026-01-21T15:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.710115 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-ope
rator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 
15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.729968 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.750642 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"2026-01-21T15:32:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9\\\\n2026-01-21T15:32:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9 to /host/opt/cni/bin/\\\\n2026-01-21T15:32:18Z [verbose] multus-daemon started\\\\n2026-01-21T15:32:18Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:33:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.762416 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.776325 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc 
kubenswrapper[4890]: I0121 15:33:19.792791 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1462f01-5bca-4532-a218-b1e897c2bde3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dec3c6ab3524fe62b68cbd9a0d85055c81972dc18663c7b3ee01d9899335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.803029 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.803077 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.803090 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 
15:33:19.803108 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.803119 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:19Z","lastTransitionTime":"2026-01-21T15:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.811501 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.828879 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.847070 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.864652 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.881944 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.908639 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:46:44.151786523 +0000 UTC Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.913233 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.913287 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.913334 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.913379 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.913403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.913421 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:19Z","lastTransitionTime":"2026-01-21T15:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:19 crc kubenswrapper[4890]: E0121 15:33:19.913800 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.925837 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.949817 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.980616 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:18Z\\\",\\\"message\\\":\\\":18.046082 6946 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-node-rp8lm\\\\nI0121 15:33:18.046086 6946 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qnlzh in node crc\\\\nF0121 15:33:18.046089 6946 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:33:18.046101 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-j9mfr\\\\nI0121 15:33:18.046102 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:33:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:19 crc kubenswrapper[4890]: I0121 15:33:19.992170 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.002153 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.013725 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.015438 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.015490 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.015513 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.015531 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.015545 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.030938 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.043390 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:20Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.117969 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.118040 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.118061 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.118089 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.118109 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.220946 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.220999 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.221011 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.221027 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.221039 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.323864 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.323895 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.323903 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.323916 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.323925 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.426179 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.426230 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.426241 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.426255 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.426266 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.528792 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.528850 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.528867 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.528889 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.528906 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.632237 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.632284 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.632294 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.632312 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.632322 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.735184 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.735229 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.735240 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.735257 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.735270 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.837873 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.837940 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.837952 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.837989 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.838006 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.840716 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.840841 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.840886 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.840924 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.840959 4890 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841070 4890 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841103 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.841068654 +0000 UTC m=+147.202511093 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841137 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.841121035 +0000 UTC m=+147.202563474 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841166 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841210 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841236 4890 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841325 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.841302739 +0000 UTC m=+147.202745188 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841578 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841617 4890 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841630 4890 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.841698 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.841679388 +0000 UTC m=+147.203121887 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.909012 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:52:22.23413894 +0000 UTC Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.913267 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.913277 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.913406 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.913448 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.913531 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.913616 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.939884 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.939939 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.940072 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.940095 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.940111 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:20Z","lastTransitionTime":"2026-01-21T15:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:20 crc kubenswrapper[4890]: I0121 15:33:20.941984 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:33:20 crc kubenswrapper[4890]: E0121 15:33:20.942229 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.942203022 +0000 UTC m=+147.303645471 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.044312 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.044426 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.044452 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.044483 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.044506 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.148047 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.148101 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.148118 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.148142 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.148160 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.250842 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.250897 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.250920 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.250942 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.250958 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.353317 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.353373 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.353386 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.353401 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.353412 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.456220 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.456251 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.456260 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.456274 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.456285 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.559106 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.559188 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.559223 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.559268 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.559292 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.661633 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.661688 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.661700 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.661718 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.661730 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.764331 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.764405 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.764417 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.764435 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.764447 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.867140 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.867200 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.867211 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.867227 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.867239 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.910204 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:01:36.631305393 +0000 UTC Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.913588 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:21 crc kubenswrapper[4890]: E0121 15:33:21.913845 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.970205 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.970257 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.970268 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.970286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:21 crc kubenswrapper[4890]: I0121 15:33:21.970298 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:21Z","lastTransitionTime":"2026-01-21T15:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.073087 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.073128 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.073139 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.073156 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.073167 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.176190 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.176241 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.176253 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.176270 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.176283 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.278943 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.278980 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.278993 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.279009 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.279018 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.347741 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.347866 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.347880 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.347893 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.347902 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: E0121 15:33:22.367852 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.371964 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.371989 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.371998 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.372019 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.372075 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: E0121 15:33:22.386403 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.391007 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.391043 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.391051 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.391064 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.391073 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: E0121 15:33:22.402398 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.407102 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.407152 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.407169 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.407191 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.407208 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: E0121 15:33:22.425498 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.429450 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.429536 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.429584 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.429608 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.429627 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: E0121 15:33:22.446485 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:22Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:22 crc kubenswrapper[4890]: E0121 15:33:22.446747 4890 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.448669 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.448720 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.448729 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.448745 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.448754 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.551788 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.551834 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.551870 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.551892 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.551905 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.655288 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.655342 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.655395 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.655421 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.655439 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.758909 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.759006 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.759023 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.759049 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.759100 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.861987 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.862031 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.862059 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.862075 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.862085 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.910672 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:21:09.007089976 +0000 UTC Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.914062 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.914114 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.914212 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:22 crc kubenswrapper[4890]: E0121 15:33:22.914441 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:22 crc kubenswrapper[4890]: E0121 15:33:22.914770 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:22 crc kubenswrapper[4890]: E0121 15:33:22.914989 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.964207 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.964246 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.964264 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.964280 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:22 crc kubenswrapper[4890]: I0121 15:33:22.964291 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:22Z","lastTransitionTime":"2026-01-21T15:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.065771 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.065842 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.065851 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.065868 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.065879 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.169031 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.169094 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.169109 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.169135 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.169152 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.271847 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.271948 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.271974 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.271999 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.272017 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.375557 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.375594 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.375610 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.375627 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.375639 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.477947 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.477992 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.478011 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.478029 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.478040 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.580544 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.580667 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.580694 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.580719 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.580736 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.683307 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.683416 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.683442 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.683471 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.683493 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.786512 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.786559 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.786570 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.786584 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.786595 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.889602 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.889658 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.889677 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.889702 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.889720 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.911063 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:06:26.636205537 +0000 UTC Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.913582 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:23 crc kubenswrapper[4890]: E0121 15:33:23.913753 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.992544 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.992593 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.992609 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.992635 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:23 crc kubenswrapper[4890]: I0121 15:33:23.992657 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:23Z","lastTransitionTime":"2026-01-21T15:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.095808 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.095869 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.095887 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.095910 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.095927 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:24Z","lastTransitionTime":"2026-01-21T15:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.198188 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.198266 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.198284 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.198307 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.198323 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:24Z","lastTransitionTime":"2026-01-21T15:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.301732 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.301791 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.301812 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.301839 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.301858 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:24Z","lastTransitionTime":"2026-01-21T15:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.404875 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.404926 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.404942 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.404964 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.404980 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:24Z","lastTransitionTime":"2026-01-21T15:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.507201 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.507252 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.507264 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.507283 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.507297 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:24Z","lastTransitionTime":"2026-01-21T15:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.610689 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.610782 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.610808 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.610842 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.610864 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:24Z","lastTransitionTime":"2026-01-21T15:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.712797 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.712845 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.712859 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.712878 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.712893 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:24Z","lastTransitionTime":"2026-01-21T15:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.814999 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.815034 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.815045 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.815061 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.815072 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:24Z","lastTransitionTime":"2026-01-21T15:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.911808 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:29:17.864862871 +0000 UTC
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.913102 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.913184 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.913332 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:33:24 crc kubenswrapper[4890]: E0121 15:33:24.913517 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:33:24 crc kubenswrapper[4890]: E0121 15:33:24.913631 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:33:24 crc kubenswrapper[4890]: E0121 15:33:24.913690 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.918028 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.918065 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.918074 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.918088 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:24 crc kubenswrapper[4890]: I0121 15:33:24.918098 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:24Z","lastTransitionTime":"2026-01-21T15:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.020702 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.020747 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.020758 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.020776 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.020789 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.124554 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.124883 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.125041 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.125204 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.125334 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.228542 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.228605 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.228620 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.228640 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.228654 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.332679 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.332709 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.332719 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.332732 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.332742 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.435025 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.435101 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.435120 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.435142 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.435159 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.537498 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.537577 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.537610 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.537640 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.537661 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.639844 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.639898 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.639915 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.639938 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.639954 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.741863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.741919 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.741937 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.741959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.741978 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.845042 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.845121 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.845131 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.845152 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.845164 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.912190 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 23:28:47.33946451 +0000 UTC
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.913764 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr"
Jan 21 15:33:25 crc kubenswrapper[4890]: E0121 15:33:25.913961 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.948262 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.948308 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.948319 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.948336 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:25 crc kubenswrapper[4890]: I0121 15:33:25.948347 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:25Z","lastTransitionTime":"2026-01-21T15:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.050542 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.050593 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.050604 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.050619 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.050629 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.153897 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.153971 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.153984 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.154005 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.154021 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.257427 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.257505 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.257529 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.257559 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.257582 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.360611 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.360675 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.360713 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.360761 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.360800 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.462607 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.462657 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.462669 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.462687 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.462698 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.565477 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.565523 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.565534 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.565551 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.565562 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.667443 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.667476 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.667484 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.667497 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.667505 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.769090 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.769204 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.769215 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.769230 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.769241 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.870907 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.870948 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.870959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.870971 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.870981 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.912559 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:20:44.559001348 +0000 UTC
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.913866 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.913890 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.913866 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:33:26 crc kubenswrapper[4890]: E0121 15:33:26.913990 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:33:26 crc kubenswrapper[4890]: E0121 15:33:26.914030 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:33:26 crc kubenswrapper[4890]: E0121 15:33:26.914074 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.973564 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.973699 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.973717 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.973736 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:26 crc kubenswrapper[4890]: I0121 15:33:26.973748 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:26Z","lastTransitionTime":"2026-01-21T15:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.076899 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.076976 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.077015 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.077037 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.077052 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:27Z","lastTransitionTime":"2026-01-21T15:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.180765 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.180809 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.180820 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.180837 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.180850 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:27Z","lastTransitionTime":"2026-01-21T15:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.283842 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.283870 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.283880 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.283895 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.283904 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:27Z","lastTransitionTime":"2026-01-21T15:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.386877 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.386927 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.386942 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.386965 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.386981 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:27Z","lastTransitionTime":"2026-01-21T15:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.489367 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.489405 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.489429 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.489446 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.489457 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:27Z","lastTransitionTime":"2026-01-21T15:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.591724 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.591765 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.591776 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.591791 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.591804 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:27Z","lastTransitionTime":"2026-01-21T15:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.694250 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.694286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.694304 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.694321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.694333 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:27Z","lastTransitionTime":"2026-01-21T15:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.797286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.797342 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.797381 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.797404 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.797421 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:27Z","lastTransitionTime":"2026-01-21T15:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.900343 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.900426 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.900438 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.900462 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.900473 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:27Z","lastTransitionTime":"2026-01-21T15:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.913024 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:09:17.038514983 +0000 UTC Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.913282 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:27 crc kubenswrapper[4890]: E0121 15:33:27.913454 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.942170 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:27Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.957845 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:27Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.979107 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"2026-01-21T15:32:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9\\\\n2026-01-21T15:32:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9 to /host/opt/cni/bin/\\\\n2026-01-21T15:32:18Z [verbose] multus-daemon started\\\\n2026-01-21T15:32:18Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:33:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:27Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:27 crc kubenswrapper[4890]: I0121 15:33:27.994566 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:27Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.002626 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.002662 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.002672 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.002687 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.002719 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.008522 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc 
kubenswrapper[4890]: I0121 15:33:28.027460 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.040596 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1462f01-5bca-4532-a218-b1e897c2bde3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dec3c6ab3524fe62b68cbd9a0d85055c81972dc18663c7b3ee01d9899335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.058157 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.072434 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.085936 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.098523 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.104863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.104894 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.104906 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.104921 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.104932 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.121773 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.137484 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.158254 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:18Z\\\",\\\"message\\\":\\\":18.046082 6946 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-node-rp8lm\\\\nI0121 15:33:18.046086 6946 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qnlzh in node crc\\\\nF0121 15:33:18.046089 6946 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:33:18.046101 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-j9mfr\\\\nI0121 15:33:18.046102 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:33:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.173992 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.188989 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.207645 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.207694 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.207708 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.207727 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.207740 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.213535 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.230919 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.244461 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:33:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.310160 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.310199 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.310210 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.310229 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.310242 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.412296 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.412414 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.412456 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.412487 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.412508 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.515584 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.515659 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.515677 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.515701 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.515717 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.617930 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.617986 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.617999 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.618015 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.618024 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.720567 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.720616 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.720629 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.720648 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.720660 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.823423 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.823483 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.823500 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.823524 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.823542 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.913563 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:14:05.896538742 +0000 UTC Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.913723 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.913726 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.913832 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:28 crc kubenswrapper[4890]: E0121 15:33:28.913933 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:28 crc kubenswrapper[4890]: E0121 15:33:28.914064 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:28 crc kubenswrapper[4890]: E0121 15:33:28.914137 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.925766 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.925844 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.925876 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.925902 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:28 crc kubenswrapper[4890]: I0121 15:33:28.925924 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:28Z","lastTransitionTime":"2026-01-21T15:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.028035 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.028073 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.028086 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.028103 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.028116 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.130953 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.131021 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.131042 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.131067 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.131084 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.233874 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.233917 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.233930 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.233946 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.233957 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.336403 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.336489 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.336508 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.336529 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.336581 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.438802 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.438836 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.438845 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.438860 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.438869 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.541690 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.541770 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.541802 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.541831 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.541860 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.645431 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.645530 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.645564 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.645581 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.645592 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.748543 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.748593 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.748603 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.748619 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.748630 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.850645 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.850714 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.850727 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.850745 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.850758 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.913731 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:00:16.547447996 +0000 UTC Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.913935 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:29 crc kubenswrapper[4890]: E0121 15:33:29.914068 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.953669 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.953710 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.953720 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.953740 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:29 crc kubenswrapper[4890]: I0121 15:33:29.953752 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:29Z","lastTransitionTime":"2026-01-21T15:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.057304 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.057537 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.057564 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.057600 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.057628 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.164618 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.164682 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.164696 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.165192 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.165238 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.268275 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.268319 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.268330 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.268345 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.268374 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.371567 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.371640 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.371654 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.371681 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.371697 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.474110 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.474151 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.474160 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.474176 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.474185 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.577145 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.577194 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.577207 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.577231 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.577272 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.680438 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.680507 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.680521 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.680568 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.680584 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.783632 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.783702 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.783729 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.783762 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.783784 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.886400 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.886465 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.886483 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.886507 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.886525 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.914138 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.914131 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 01:17:06.227417429 +0000 UTC Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.914143 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.914549 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:30 crc kubenswrapper[4890]: E0121 15:33:30.914659 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:30 crc kubenswrapper[4890]: E0121 15:33:30.914764 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:30 crc kubenswrapper[4890]: E0121 15:33:30.914906 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.990121 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.990152 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.990160 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.990174 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:30 crc kubenswrapper[4890]: I0121 15:33:30.990183 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:30Z","lastTransitionTime":"2026-01-21T15:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.093690 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.093761 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.093779 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.093803 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.093822 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:31Z","lastTransitionTime":"2026-01-21T15:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.196773 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.196845 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.196886 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.196922 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.196950 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:31Z","lastTransitionTime":"2026-01-21T15:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.299786 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.299848 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.299865 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.299889 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.299907 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:31Z","lastTransitionTime":"2026-01-21T15:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.402676 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.402712 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.402721 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.402735 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.402746 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:31Z","lastTransitionTime":"2026-01-21T15:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.505938 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.505973 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.505984 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.506001 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.506010 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:31Z","lastTransitionTime":"2026-01-21T15:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.609162 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.609224 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.609243 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.609269 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.609285 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:31Z","lastTransitionTime":"2026-01-21T15:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.712877 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.712935 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.712949 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.712977 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.712992 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:31Z","lastTransitionTime":"2026-01-21T15:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.815750 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.815806 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.815820 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.815840 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.815857 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:31Z","lastTransitionTime":"2026-01-21T15:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.913626 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:31 crc kubenswrapper[4890]: E0121 15:33:31.913991 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.914321 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:57:18.732152686 +0000 UTC Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.918337 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.918406 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.918423 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.918442 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:31 crc kubenswrapper[4890]: I0121 15:33:31.918455 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:31Z","lastTransitionTime":"2026-01-21T15:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.021050 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.021100 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.021112 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.021132 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.021143 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.124485 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.124556 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.124582 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.124612 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.124635 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.227126 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.227199 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.227213 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.227270 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.227286 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.330594 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.330647 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.330664 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.330688 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.330701 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.434834 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.434879 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.434891 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.434910 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.434925 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.538083 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.538122 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.538131 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.538151 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.538163 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.646404 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.646461 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.646473 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.646493 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.646506 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.662851 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.668423 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.668494 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.668504 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.668526 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.668537 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.685194 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.690121 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.690170 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.690187 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.690214 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.690229 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.706118 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.711396 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.711442 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.711452 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.711470 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.711483 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.728324 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.733785 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.733844 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.733862 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.733885 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.733898 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.752502 4890 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4ce3cfc-d802-4b98-b49d-ffcfc53fa20b\\\",\\\"systemUUID\\\":\\\"18a17417-1572-4a09-b67d-6fcf4ac1275e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.752679 4890 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.754290 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.754314 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.754323 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.754339 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.754365 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.857916 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.857991 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.858004 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.858021 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.858033 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.913162 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.914044 4890 scope.go:117] "RemoveContainer" containerID="a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8" Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.914251 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.914409 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.914449 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.914478 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:18:02.284939533 +0000 UTC Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.914569 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.914647 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:32 crc kubenswrapper[4890]: E0121 15:33:32.914745 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.960470 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.960495 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.960503 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.960515 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:32 crc kubenswrapper[4890]: I0121 15:33:32.960544 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:32Z","lastTransitionTime":"2026-01-21T15:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.063725 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.063806 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.063814 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.063829 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.063839 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.168262 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.168306 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.168318 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.168332 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.168340 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.272194 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.272577 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.272757 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.272895 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.273002 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.376189 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.376236 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.376249 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.376268 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.376280 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.479444 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.479494 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.479505 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.479523 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.479536 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.582408 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.582514 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.582535 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.582563 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.582583 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.685223 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.685313 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.685402 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.685436 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.685460 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.789998 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.790074 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.790091 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.790119 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.790141 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.892914 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.892964 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.892975 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.892995 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.893009 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.914217 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:33 crc kubenswrapper[4890]: E0121 15:33:33.914438 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.914594 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:31:59.425757746 +0000 UTC Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.995251 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.995317 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.995332 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.995370 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:33 crc kubenswrapper[4890]: I0121 15:33:33.995383 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:33Z","lastTransitionTime":"2026-01-21T15:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.099024 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.099088 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.099105 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.099127 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.099144 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:34Z","lastTransitionTime":"2026-01-21T15:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.202070 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.202134 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.202151 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.202173 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.202190 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:34Z","lastTransitionTime":"2026-01-21T15:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.305812 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.305911 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.305932 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.305955 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.305973 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:34Z","lastTransitionTime":"2026-01-21T15:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.408919 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.408972 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.408986 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.409006 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.409020 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:34Z","lastTransitionTime":"2026-01-21T15:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.512594 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.512630 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.512641 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.512657 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.512669 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:34Z","lastTransitionTime":"2026-01-21T15:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.615463 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.615506 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.615516 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.615530 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.615540 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:34Z","lastTransitionTime":"2026-01-21T15:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.689523 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:34 crc kubenswrapper[4890]: E0121 15:33:34.689718 4890 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:33:34 crc kubenswrapper[4890]: E0121 15:33:34.689801 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs podName:a86abbe4-e7c5-4a3e-a8d7-02d82267ded6 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:38.689778051 +0000 UTC m=+161.051220490 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs") pod "network-metrics-daemon-j9mfr" (UID: "a86abbe4-e7c5-4a3e-a8d7-02d82267ded6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.718669 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.718762 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.718784 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.718823 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.718851 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:34Z","lastTransitionTime":"2026-01-21T15:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.823047 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.823110 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.823130 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.823157 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.823176 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:34Z","lastTransitionTime":"2026-01-21T15:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.913264 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.913454 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.913499 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:34 crc kubenswrapper[4890]: E0121 15:33:34.913648 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:34 crc kubenswrapper[4890]: E0121 15:33:34.913819 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:34 crc kubenswrapper[4890]: E0121 15:33:34.913937 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.915300 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:05:56.747372486 +0000 UTC Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.926241 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.926282 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.926293 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.926338 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:34 crc kubenswrapper[4890]: I0121 15:33:34.926363 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:34Z","lastTransitionTime":"2026-01-21T15:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.029171 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.029257 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.029277 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.029301 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.029318 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.132324 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.132389 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.132405 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.132425 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.132440 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.236173 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.236252 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.236273 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.236301 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.236321 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.340057 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.340127 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.340151 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.340175 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.340193 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.443468 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.443541 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.443567 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.443598 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.443619 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.545935 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.545991 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.546005 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.546021 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.546035 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.648303 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.648340 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.648386 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.648400 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.648411 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.750460 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.750489 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.750519 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.750533 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.750541 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.853189 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.853244 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.853254 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.853272 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.853281 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.913408 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:35 crc kubenswrapper[4890]: E0121 15:33:35.914004 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.915470 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:10:05.017377215 +0000 UTC Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.956444 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.956515 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.956533 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.956560 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:35 crc kubenswrapper[4890]: I0121 15:33:35.956577 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:35Z","lastTransitionTime":"2026-01-21T15:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.059435 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.059495 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.059514 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.059541 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.059559 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.162325 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.162414 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.162432 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.162456 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.162478 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.265248 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.265307 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.265318 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.265335 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.265376 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.368544 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.368600 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.368621 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.368647 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.368664 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.471757 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.471802 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.471947 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.472132 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.472143 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.575251 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.575319 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.575338 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.575402 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.575423 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.678814 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.678874 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.678891 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.678916 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.678933 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.781184 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.781245 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.781263 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.781286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.781303 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.884721 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.884785 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.884805 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.884829 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.884847 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.913441 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.913534 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:36 crc kubenswrapper[4890]: E0121 15:33:36.913684 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.913441 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:36 crc kubenswrapper[4890]: E0121 15:33:36.913918 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:36 crc kubenswrapper[4890]: E0121 15:33:36.914076 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.915648 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:28:58.986050204 +0000 UTC Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.987788 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.987887 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.987903 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.987927 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:36 crc kubenswrapper[4890]: I0121 15:33:36.987945 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:36Z","lastTransitionTime":"2026-01-21T15:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.090754 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.090808 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.090816 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.090833 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.090843 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:37Z","lastTransitionTime":"2026-01-21T15:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.193209 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.193264 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.193275 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.193292 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.193302 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:37Z","lastTransitionTime":"2026-01-21T15:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.296142 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.296184 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.296192 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.296208 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.296219 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:37Z","lastTransitionTime":"2026-01-21T15:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.398570 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.398608 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.398619 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.398636 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.398648 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:37Z","lastTransitionTime":"2026-01-21T15:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.502158 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.502224 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.502243 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.502268 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.502284 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:37Z","lastTransitionTime":"2026-01-21T15:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.604939 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.605030 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.605057 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.605086 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.605109 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:37Z","lastTransitionTime":"2026-01-21T15:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.707412 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.707456 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.707467 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.707486 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.707499 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:37Z","lastTransitionTime":"2026-01-21T15:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.810194 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.810269 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.810292 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.810321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.810379 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:37Z","lastTransitionTime":"2026-01-21T15:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.913108 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.913249 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.913294 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.913306 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.913321 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:37 crc kubenswrapper[4890]: E0121 15:33:37.913313 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.913332 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:37Z","lastTransitionTime":"2026-01-21T15:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.916397 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:59:31.390088893 +0000 UTC Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.937966 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"224c1249-09e8-480d-b924-ac297d8738f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14089ddcd247dfea0a4c0cebec8d2b9d517e75c9d2e80834a5154b38aaad59e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1597f494bfdc7f8461578d60686f720e9fdf46fbccb610f84e38c5d2bc452e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ed9434a44a2b0f6e0e05b85260b1738abf01570ebb6152fd5b77c4060e5485b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61418fddb7a4c56de6
b8702f12e57015d1d87e663b2181b6fc9aa8d6da375e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6628d437c675d0f642805e2bff79915a9daef97e39574969cedf66856e54b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbddedbf83735f60b1cb2f8ebcfa38a686594c596ffca2d6281c317a5054c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ed8f98c5806601e4069dfcea77fa509178ae12888088f6d9ddc22f0c7fdfef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eaac91c516d13f0218bf87401f483591256caf6dda64a25e90b8268bf071d04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.953288 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aaff44d-46c5-4ba7-aaf8-0bca46c4e620\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9afdd0ab2aef119407ecb83a73c404add0bfc3f20388bd03b1442131771417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbb9449b18aacbe5b0f8bdeb6f4a0b672cb1d65d5b0b34f16a743d81dc2137b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8126fa221410763c7c44f7fc1a33e376d13fb0f7c9f6268e6250396cc283b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://55a19facbe33e2087a8588b42b529fcceb72c7ca0ce39d73a6bebe57acac3f07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:37 crc kubenswrapper[4890]: I0121 15:33:37.984834 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:18Z\\\",\\\"message\\\":\\\":18.046082 6946 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-node-rp8lm\\\\nI0121 15:33:18.046086 6946 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qnlzh in node crc\\\\nF0121 15:33:18.046089 6946 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:17Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:33:18.046101 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-j9mfr\\\\nI0121 15:33:18.046102 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:33:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb5aa3d317a5aad26
4ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7pqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rp8lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.008626 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.016205 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.016295 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.016309 4890 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.016334 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.016373 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.029414 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vrb68" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a9b5f1-5b7a-48b3-b941-8255b14d809f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319175ed79079ae52c7a8b9b271e325714a3b90
de5592223a7aff8a5e450f160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c85zh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vrb68\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.049247 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67047065-8bad-4e4d-8b91-47e7ee72ffb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174cf661228bae7bed24e2e703e31d0230aac6c18ab9997dd8e455176097f996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2643d64c6aecfa4381475d22ae487984ddf128e
b77cff2c0cbbedb50b436731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8pq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qnlzh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.067956 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-msckx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9260bc10-0bda-4046-9b76-78b103f176be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c696651722099022983832dd102095f2ed9136358c0eeceec2827f203f12ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://793882d1d8242e0148ad3acb629217b5910faa7ffaf8e21a85f9201ef3705444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb91aaf476b59c169926816ec0b3ee494f104b6a83e742fbd4b6b513dc700bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ce4c6d1336a203f89849671c66ce7ed34e3500524afcb6ef4185781bc92c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6359f
78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6359f78dc26e0dd25cf1a58bcb689b3f4b3b9442c8e4cc229448caa1669fdfc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b00ba33527758ec37f0f61bca3ac0e1b56ad0c3207b2384ee65aad95ed8539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e3d971b426a6cd7da3bf8da225c45c7299c5c186fd48291285629a7d10598b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nvmgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-msckx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.081909 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3278cad5-c53a-400a-9d2d-22a98bda2773\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d4c1034ea32971d1172465f4ad692d8a8aa0776d1feba00a451b749b6c941a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22df52a6e533448589304720151dc3833176fb29c7da74544e0f7247818cc012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42ftf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nzzdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.103599 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64a6fe1-2ef4-4fbb-9cd1-e6a232644494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 15:32:10.323881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:32:10.325999 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3543149802/tls.crt::/tmp/serving-cert-3543149802/tls.key\\\\\\\"\\\\nI0121 15:32:16.253940 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:32:16.267938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:32:16.267981 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:32:16.268027 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:32:16.268035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:32:16.275226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:32:16.275343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:32:16.275426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:32:16.275450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:32:16.275488 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:32:16.275513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:32:16.275720 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:32:16.278283 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.118535 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.118596 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.118618 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.118647 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.118669 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.121129 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b48846ef5bee8a1e104e9b803a3f21662ae1a14b69ec7ce83cfc1a0f1d920bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.137094 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pflt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba30f20-e5ad-4888-850d-1715115ab8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:33:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:33:03Z\\\",\\\"message\\\":\\\"2026-01-21T15:32:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9\\\\n2026-01-21T15:32:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f096ce84-c6cb-450a-8e35-c8c8860ceaa9 to /host/opt/cni/bin/\\\\n2026-01-21T15:32:18Z [verbose] multus-daemon started\\\\n2026-01-21T15:32:18Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:33:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58ncx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pflt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.151198 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-twcft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc746ac-6844-4a76-a68d-ff79281e1561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0d42c842319f6470c27a77de01788eb08eecc7d02c6db7a676c23074b7cbb6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pjtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-twcft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.162016 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m7gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:32:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j9mfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc 
kubenswrapper[4890]: I0121 15:33:38.179112 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26242cc43e402d0bd3137040b94b89aedda28f604a692f91c7da01303166ef9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.190916 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1462f01-5bca-4532-a218-b1e897c2bde3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dec3c6ab3524fe62b68cbd9a0d85055c81972dc18663c7b3ee01d9899335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://073998ef48bb85643fa3d31f7d7f1db081fb1e88be6e1543f0e38b64cbf71d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.203974 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0781f9b6-dd05-4e5f-85ca-09bf5adad978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2221bef6c50948e3feb2c962d35f09953114b2ca201f063b36a667075a4ab1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4459251d799a73ad697ee2988bfc81903ddf4e9571b16884715caf1f5ae8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d5b96c31d7ce905e8bbaca08fbe83f0fcf795570ab22df30cf48791336c178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:31:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.217619 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.221657 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.221703 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.221718 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 
15:33:38.221737 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.221778 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.232338 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.246786 4890 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:32:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7c3c4e8910c6df04baa59f1b7b5f0cc4aee1397ceb90f2fcda0e53d84e3495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2477c1421062ecfea4ae3336b54fc7750c54ba663dd8703c6a45bbf1df84a457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:33:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.324779 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.324840 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.324852 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.324870 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.324882 4890 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.428877 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.428924 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.428938 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.428955 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.428968 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.531887 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.531944 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.531959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.531981 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.531995 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.634078 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.634146 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.634164 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.634190 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.634207 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.737666 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.737730 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.737752 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.737782 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.737806 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.841322 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.841408 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.841425 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.841474 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.841491 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.913458 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.913581 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:38 crc kubenswrapper[4890]: E0121 15:33:38.913640 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.913495 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:38 crc kubenswrapper[4890]: E0121 15:33:38.913728 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:38 crc kubenswrapper[4890]: E0121 15:33:38.913976 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.917327 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:28:54.436774319 +0000 UTC Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.944875 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.944922 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.944931 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.944945 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:38 crc kubenswrapper[4890]: I0121 15:33:38.944955 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:38Z","lastTransitionTime":"2026-01-21T15:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.048449 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.048523 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.048545 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.048575 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.048598 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.152320 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.152443 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.152467 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.152500 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.152524 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.255838 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.255910 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.255931 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.255959 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.255982 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.358440 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.358495 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.358507 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.358526 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.358538 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.460803 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.460851 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.460863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.460881 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.460893 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.562850 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.562887 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.562895 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.562908 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.562918 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.666146 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.666199 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.666221 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.666251 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.666272 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.768931 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.768967 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.768979 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.768994 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.769005 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.871786 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.871844 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.871861 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.871885 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.871902 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.913617 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:39 crc kubenswrapper[4890]: E0121 15:33:39.914014 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.918127 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:53:08.482663583 +0000 UTC Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.974884 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.974955 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.974976 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.974998 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:39 crc kubenswrapper[4890]: I0121 15:33:39.975017 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:39Z","lastTransitionTime":"2026-01-21T15:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.077727 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.077777 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.077791 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.077816 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.077829 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:40Z","lastTransitionTime":"2026-01-21T15:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.180337 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.180411 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.180422 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.180441 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.180453 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:40Z","lastTransitionTime":"2026-01-21T15:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.283677 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.283808 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.283834 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.283863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.283881 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:40Z","lastTransitionTime":"2026-01-21T15:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.387937 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.387980 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.387988 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.388002 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.388014 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:40Z","lastTransitionTime":"2026-01-21T15:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.490113 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.490159 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.490168 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.490181 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.490190 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:40Z","lastTransitionTime":"2026-01-21T15:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.593372 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.593410 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.593420 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.593435 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.593445 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:40Z","lastTransitionTime":"2026-01-21T15:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.696679 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.696732 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.696751 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.696776 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.696795 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:40Z","lastTransitionTime":"2026-01-21T15:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.799221 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.799287 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.799310 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.799340 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.799398 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:40Z","lastTransitionTime":"2026-01-21T15:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.902497 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.902550 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.902563 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.902580 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.902593 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:40Z","lastTransitionTime":"2026-01-21T15:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.914119 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.914180 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:40 crc kubenswrapper[4890]: E0121 15:33:40.914264 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.914119 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:40 crc kubenswrapper[4890]: E0121 15:33:40.914523 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:40 crc kubenswrapper[4890]: E0121 15:33:40.915133 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:40 crc kubenswrapper[4890]: I0121 15:33:40.919151 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:02:04.669041844 +0000 UTC Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.005101 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.005173 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.005192 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.005222 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.005241 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.108639 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.108684 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.108698 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.108715 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.108726 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.212006 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.212079 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.212104 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.212139 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.212178 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.315081 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.315158 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.315185 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.315218 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.315243 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.417778 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.417833 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.417844 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.417863 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.417875 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.520773 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.520841 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.520861 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.520884 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.520901 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.623768 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.623832 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.623850 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.623873 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.623892 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.726823 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.726962 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.726998 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.727030 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.727051 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.829526 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.829595 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.829618 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.829649 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.829674 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.913194 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:41 crc kubenswrapper[4890]: E0121 15:33:41.913543 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.919313 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:24:30.582779155 +0000 UTC Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.932516 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.932563 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.932578 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.932596 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:41 crc kubenswrapper[4890]: I0121 15:33:41.932609 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:41Z","lastTransitionTime":"2026-01-21T15:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.035347 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.035475 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.035492 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.035516 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.035533 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:42Z","lastTransitionTime":"2026-01-21T15:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.138827 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.138898 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.138916 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.138944 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.138962 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:42Z","lastTransitionTime":"2026-01-21T15:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.243385 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.243457 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.243480 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.243509 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.243531 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:42Z","lastTransitionTime":"2026-01-21T15:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.346407 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.346484 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.346507 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.346538 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.346560 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:42Z","lastTransitionTime":"2026-01-21T15:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.450033 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.450100 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.450119 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.450144 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.450161 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:42Z","lastTransitionTime":"2026-01-21T15:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.552485 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.552543 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.552563 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.552587 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.552605 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:42Z","lastTransitionTime":"2026-01-21T15:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.655167 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.655208 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.655220 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.655239 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.655250 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:42Z","lastTransitionTime":"2026-01-21T15:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.758188 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.758266 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.758286 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.758315 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.758338 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:42Z","lastTransitionTime":"2026-01-21T15:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.828006 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.828175 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.828201 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.828228 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.828250 4890 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:33:42Z","lastTransitionTime":"2026-01-21T15:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.891417 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn"] Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.891995 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.895086 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.895419 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.896755 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.897318 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.913880 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.913986 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.913880 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:42 crc kubenswrapper[4890]: E0121 15:33:42.914114 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:42 crc kubenswrapper[4890]: E0121 15:33:42.914230 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:42 crc kubenswrapper[4890]: E0121 15:33:42.914419 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.919796 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:16:07.698882913 +0000 UTC Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.919856 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.920970 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-pflt5" podStartSLOduration=86.920959513 podStartE2EDuration="1m26.920959513s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:42.920926022 +0000 UTC m=+105.282368431" watchObservedRunningTime="2026-01-21 
15:33:42.920959513 +0000 UTC m=+105.282401922" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.928999 4890 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.947051 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-twcft" podStartSLOduration=86.947021051 podStartE2EDuration="1m26.947021051s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:42.932332887 +0000 UTC m=+105.293775296" watchObservedRunningTime="2026-01-21 15:33:42.947021051 +0000 UTC m=+105.308463460" Jan 21 15:33:42 crc kubenswrapper[4890]: I0121 15:33:42.981196 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.981165315 podStartE2EDuration="1m25.981165315s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:42.964436761 +0000 UTC m=+105.325879190" watchObservedRunningTime="2026-01-21 15:33:42.981165315 +0000 UTC m=+105.342607764" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.070392 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.070336945 podStartE2EDuration="1m25.070336945s" podCreationTimestamp="2026-01-21 15:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:43.070250893 +0000 UTC m=+105.431693322" watchObservedRunningTime="2026-01-21 15:33:43.070336945 +0000 UTC m=+105.431779354" Jan 21 
15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.071184 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.071174475 podStartE2EDuration="37.071174475s" podCreationTimestamp="2026-01-21 15:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:43.053280104 +0000 UTC m=+105.414722503" watchObservedRunningTime="2026-01-21 15:33:43.071174475 +0000 UTC m=+105.432616884" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.080766 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f023b311-24f7-41ee-ab23-57133f47ce5d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.080821 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f023b311-24f7-41ee-ab23-57133f47ce5d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.080880 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f023b311-24f7-41ee-ab23-57133f47ce5d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.080909 
4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f023b311-24f7-41ee-ab23-57133f47ce5d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.080975 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f023b311-24f7-41ee-ab23-57133f47ce5d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.134717 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=84.134695637 podStartE2EDuration="1m24.134695637s" podCreationTimestamp="2026-01-21 15:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:43.133658432 +0000 UTC m=+105.495100891" watchObservedRunningTime="2026-01-21 15:33:43.134695637 +0000 UTC m=+105.496138076" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.148388 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=53.148368406 podStartE2EDuration="53.148368406s" podCreationTimestamp="2026-01-21 15:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:43.147988967 +0000 UTC m=+105.509431376" watchObservedRunningTime="2026-01-21 15:33:43.148368406 +0000 UTC m=+105.509810825" Jan 21 15:33:43 crc 
kubenswrapper[4890]: I0121 15:33:43.162927 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podStartSLOduration=87.162897337 podStartE2EDuration="1m27.162897337s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:43.162037706 +0000 UTC m=+105.523480135" watchObservedRunningTime="2026-01-21 15:33:43.162897337 +0000 UTC m=+105.524339746" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.182324 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f023b311-24f7-41ee-ab23-57133f47ce5d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.182409 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f023b311-24f7-41ee-ab23-57133f47ce5d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.182433 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f023b311-24f7-41ee-ab23-57133f47ce5d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.182441 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f023b311-24f7-41ee-ab23-57133f47ce5d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.182483 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f023b311-24f7-41ee-ab23-57133f47ce5d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.182506 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f023b311-24f7-41ee-ab23-57133f47ce5d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.182810 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f023b311-24f7-41ee-ab23-57133f47ce5d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.183822 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f023b311-24f7-41ee-ab23-57133f47ce5d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.190981 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f023b311-24f7-41ee-ab23-57133f47ce5d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: \"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.200012 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nzzdz" podStartSLOduration=86.199992381 podStartE2EDuration="1m26.199992381s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:43.199266704 +0000 UTC m=+105.560709113" watchObservedRunningTime="2026-01-21 15:33:43.199992381 +0000 UTC m=+105.561434790" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.201111 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-msckx" podStartSLOduration=87.201103168 podStartE2EDuration="1m27.201103168s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:43.184873827 +0000 UTC m=+105.546316246" watchObservedRunningTime="2026-01-21 15:33:43.201103168 +0000 UTC m=+105.562545587" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.206551 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f023b311-24f7-41ee-ab23-57133f47ce5d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mvrpn\" (UID: 
\"f023b311-24f7-41ee-ab23-57133f47ce5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.211611 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.237030 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vrb68" podStartSLOduration=87.237011164 podStartE2EDuration="1m27.237011164s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:43.236416859 +0000 UTC m=+105.597859268" watchObservedRunningTime="2026-01-21 15:33:43.237011164 +0000 UTC m=+105.598453563" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.770325 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" event={"ID":"f023b311-24f7-41ee-ab23-57133f47ce5d","Type":"ContainerStarted","Data":"8ad73b8f3b70fda578dfa6c7888b650375d049edf4b9592626ebda0d6b56df49"} Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.770442 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" event={"ID":"f023b311-24f7-41ee-ab23-57133f47ce5d","Type":"ContainerStarted","Data":"c643200b32f146bcd110fafd5a03940ac6540af5bdb5372be978e1c0427b1aef"} Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.788279 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mvrpn" podStartSLOduration=87.788255576 podStartE2EDuration="1m27.788255576s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:33:43.787716703 +0000 UTC m=+106.149159112" watchObservedRunningTime="2026-01-21 15:33:43.788255576 +0000 UTC m=+106.149698025" Jan 21 15:33:43 crc kubenswrapper[4890]: I0121 15:33:43.913267 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:43 crc kubenswrapper[4890]: E0121 15:33:43.913763 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:44 crc kubenswrapper[4890]: I0121 15:33:44.913122 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:44 crc kubenswrapper[4890]: I0121 15:33:44.913198 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:44 crc kubenswrapper[4890]: I0121 15:33:44.913123 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:44 crc kubenswrapper[4890]: E0121 15:33:44.913269 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:44 crc kubenswrapper[4890]: E0121 15:33:44.913369 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:44 crc kubenswrapper[4890]: E0121 15:33:44.913469 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:45 crc kubenswrapper[4890]: I0121 15:33:45.913999 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:45 crc kubenswrapper[4890]: E0121 15:33:45.914564 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:46 crc kubenswrapper[4890]: I0121 15:33:46.913521 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:46 crc kubenswrapper[4890]: I0121 15:33:46.913559 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:46 crc kubenswrapper[4890]: E0121 15:33:46.913680 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:46 crc kubenswrapper[4890]: I0121 15:33:46.913765 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:46 crc kubenswrapper[4890]: E0121 15:33:46.913910 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:46 crc kubenswrapper[4890]: E0121 15:33:46.913953 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:47 crc kubenswrapper[4890]: I0121 15:33:47.913433 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:47 crc kubenswrapper[4890]: E0121 15:33:47.916022 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:47 crc kubenswrapper[4890]: I0121 15:33:47.918925 4890 scope.go:117] "RemoveContainer" containerID="a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8" Jan 21 15:33:47 crc kubenswrapper[4890]: E0121 15:33:47.919237 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rp8lm_openshift-ovn-kubernetes(86d5dcae-8e63-4910-9a28-4f6a5b2d427f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" Jan 21 15:33:48 crc kubenswrapper[4890]: I0121 15:33:48.913834 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:48 crc kubenswrapper[4890]: I0121 15:33:48.913846 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:48 crc kubenswrapper[4890]: I0121 15:33:48.913876 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:48 crc kubenswrapper[4890]: E0121 15:33:48.914069 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:48 crc kubenswrapper[4890]: E0121 15:33:48.914168 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:48 crc kubenswrapper[4890]: E0121 15:33:48.914521 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:49 crc kubenswrapper[4890]: I0121 15:33:49.792589 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/1.log" Jan 21 15:33:49 crc kubenswrapper[4890]: I0121 15:33:49.793126 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/0.log" Jan 21 15:33:49 crc kubenswrapper[4890]: I0121 15:33:49.793182 4890 generic.go:334] "Generic (PLEG): container finished" podID="eba30f20-e5ad-4888-850d-1715115ab8bd" containerID="68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50" exitCode=1 Jan 21 15:33:49 crc kubenswrapper[4890]: I0121 15:33:49.793225 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pflt5" event={"ID":"eba30f20-e5ad-4888-850d-1715115ab8bd","Type":"ContainerDied","Data":"68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50"} Jan 21 15:33:49 crc kubenswrapper[4890]: I0121 15:33:49.793298 4890 scope.go:117] "RemoveContainer" containerID="e878cbfb43634d8ff131ba021a56395fe0e7f4ff59dd56b5b280c0b1f91775d7" Jan 21 15:33:49 crc kubenswrapper[4890]: I0121 15:33:49.794861 4890 scope.go:117] "RemoveContainer" containerID="68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50" Jan 21 15:33:49 crc kubenswrapper[4890]: E0121 15:33:49.795600 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-pflt5_openshift-multus(eba30f20-e5ad-4888-850d-1715115ab8bd)\"" pod="openshift-multus/multus-pflt5" podUID="eba30f20-e5ad-4888-850d-1715115ab8bd" Jan 21 15:33:49 crc kubenswrapper[4890]: I0121 15:33:49.913769 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:49 crc kubenswrapper[4890]: E0121 15:33:49.913908 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:50 crc kubenswrapper[4890]: I0121 15:33:50.798331 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/1.log" Jan 21 15:33:50 crc kubenswrapper[4890]: I0121 15:33:50.913389 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:50 crc kubenswrapper[4890]: I0121 15:33:50.913448 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:50 crc kubenswrapper[4890]: I0121 15:33:50.913506 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:50 crc kubenswrapper[4890]: E0121 15:33:50.913689 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:50 crc kubenswrapper[4890]: E0121 15:33:50.913831 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:50 crc kubenswrapper[4890]: E0121 15:33:50.914000 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:51 crc kubenswrapper[4890]: I0121 15:33:51.913606 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:51 crc kubenswrapper[4890]: E0121 15:33:51.913890 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:52 crc kubenswrapper[4890]: I0121 15:33:52.913807 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:52 crc kubenswrapper[4890]: I0121 15:33:52.913868 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:52 crc kubenswrapper[4890]: I0121 15:33:52.913821 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:52 crc kubenswrapper[4890]: E0121 15:33:52.913998 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:52 crc kubenswrapper[4890]: E0121 15:33:52.914114 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:52 crc kubenswrapper[4890]: E0121 15:33:52.914203 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:53 crc kubenswrapper[4890]: I0121 15:33:53.914119 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:53 crc kubenswrapper[4890]: E0121 15:33:53.914274 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:54 crc kubenswrapper[4890]: I0121 15:33:54.913267 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:54 crc kubenswrapper[4890]: I0121 15:33:54.913344 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:54 crc kubenswrapper[4890]: I0121 15:33:54.913267 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:54 crc kubenswrapper[4890]: E0121 15:33:54.913486 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:54 crc kubenswrapper[4890]: E0121 15:33:54.913584 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:54 crc kubenswrapper[4890]: E0121 15:33:54.913638 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:55 crc kubenswrapper[4890]: I0121 15:33:55.913915 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:55 crc kubenswrapper[4890]: E0121 15:33:55.914153 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:56 crc kubenswrapper[4890]: I0121 15:33:56.914119 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:56 crc kubenswrapper[4890]: E0121 15:33:56.914267 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:56 crc kubenswrapper[4890]: I0121 15:33:56.914365 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:56 crc kubenswrapper[4890]: I0121 15:33:56.914338 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:56 crc kubenswrapper[4890]: E0121 15:33:56.914502 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:56 crc kubenswrapper[4890]: E0121 15:33:56.914722 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:57 crc kubenswrapper[4890]: I0121 15:33:57.914159 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:57 crc kubenswrapper[4890]: E0121 15:33:57.914277 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:33:57 crc kubenswrapper[4890]: E0121 15:33:57.919184 4890 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 15:33:58 crc kubenswrapper[4890]: E0121 15:33:58.050886 4890 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 15:33:58 crc kubenswrapper[4890]: I0121 15:33:58.913503 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:33:58 crc kubenswrapper[4890]: I0121 15:33:58.913528 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:33:58 crc kubenswrapper[4890]: E0121 15:33:58.914079 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:33:58 crc kubenswrapper[4890]: I0121 15:33:58.913640 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:33:58 crc kubenswrapper[4890]: E0121 15:33:58.914237 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:33:58 crc kubenswrapper[4890]: E0121 15:33:58.914581 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:33:59 crc kubenswrapper[4890]: I0121 15:33:59.913322 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:33:59 crc kubenswrapper[4890]: E0121 15:33:59.913572 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:34:00 crc kubenswrapper[4890]: I0121 15:34:00.914130 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:34:00 crc kubenswrapper[4890]: I0121 15:34:00.914346 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:00 crc kubenswrapper[4890]: I0121 15:34:00.914573 4890 scope.go:117] "RemoveContainer" containerID="68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50" Jan 21 15:34:00 crc kubenswrapper[4890]: E0121 15:34:00.914718 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:34:00 crc kubenswrapper[4890]: I0121 15:34:00.915143 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:34:00 crc kubenswrapper[4890]: E0121 15:34:00.915655 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:34:00 crc kubenswrapper[4890]: E0121 15:34:00.915802 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:34:00 crc kubenswrapper[4890]: I0121 15:34:00.917724 4890 scope.go:117] "RemoveContainer" containerID="a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8" Jan 21 15:34:01 crc kubenswrapper[4890]: I0121 15:34:01.814592 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j9mfr"] Jan 21 15:34:01 crc kubenswrapper[4890]: I0121 15:34:01.814777 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:34:01 crc kubenswrapper[4890]: E0121 15:34:01.814946 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:34:01 crc kubenswrapper[4890]: I0121 15:34:01.837483 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/1.log" Jan 21 15:34:01 crc kubenswrapper[4890]: I0121 15:34:01.837859 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pflt5" event={"ID":"eba30f20-e5ad-4888-850d-1715115ab8bd","Type":"ContainerStarted","Data":"c5ab0cadc8ae9b2a5654460dcd503ca706de3d4bf65487b20e0f6393e55f00e6"} Jan 21 15:34:01 crc kubenswrapper[4890]: I0121 15:34:01.840512 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/3.log" Jan 21 15:34:01 crc kubenswrapper[4890]: I0121 15:34:01.843653 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerStarted","Data":"bd3bfcadff93dd0ae59b2f2fe1e4993c6b7ab057555f7a7201932eb3cd4c60cb"} Jan 21 15:34:01 crc kubenswrapper[4890]: I0121 15:34:01.844274 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:34:01 crc kubenswrapper[4890]: I0121 15:34:01.895517 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podStartSLOduration=105.895496963 podStartE2EDuration="1m45.895496963s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:01.895008192 +0000 UTC m=+124.256450611" watchObservedRunningTime="2026-01-21 15:34:01.895496963 +0000 UTC m=+124.256939372" Jan 21 15:34:02 crc kubenswrapper[4890]: I0121 15:34:02.913138 4890 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:34:02 crc kubenswrapper[4890]: I0121 15:34:02.913130 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:34:02 crc kubenswrapper[4890]: E0121 15:34:02.913961 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:34:02 crc kubenswrapper[4890]: I0121 15:34:02.913167 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:02 crc kubenswrapper[4890]: E0121 15:34:02.914091 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:34:02 crc kubenswrapper[4890]: E0121 15:34:02.914205 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:34:03 crc kubenswrapper[4890]: E0121 15:34:03.052327 4890 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 15:34:03 crc kubenswrapper[4890]: I0121 15:34:03.914066 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:34:03 crc kubenswrapper[4890]: E0121 15:34:03.914230 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:34:04 crc kubenswrapper[4890]: I0121 15:34:04.913678 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:34:04 crc kubenswrapper[4890]: I0121 15:34:04.913678 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:34:04 crc kubenswrapper[4890]: E0121 15:34:04.913891 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:34:04 crc kubenswrapper[4890]: E0121 15:34:04.913985 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:34:04 crc kubenswrapper[4890]: I0121 15:34:04.913703 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:04 crc kubenswrapper[4890]: E0121 15:34:04.914100 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:34:05 crc kubenswrapper[4890]: I0121 15:34:05.913881 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:34:05 crc kubenswrapper[4890]: E0121 15:34:05.914054 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:34:06 crc kubenswrapper[4890]: I0121 15:34:06.913547 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:34:06 crc kubenswrapper[4890]: E0121 15:34:06.913772 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:34:06 crc kubenswrapper[4890]: I0121 15:34:06.913849 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:06 crc kubenswrapper[4890]: I0121 15:34:06.913923 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:34:06 crc kubenswrapper[4890]: E0121 15:34:06.914033 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:34:06 crc kubenswrapper[4890]: E0121 15:34:06.914199 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:34:07 crc kubenswrapper[4890]: I0121 15:34:07.913648 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:34:07 crc kubenswrapper[4890]: E0121 15:34:07.916015 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j9mfr" podUID="a86abbe4-e7c5-4a3e-a8d7-02d82267ded6" Jan 21 15:34:08 crc kubenswrapper[4890]: I0121 15:34:08.913607 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:08 crc kubenswrapper[4890]: I0121 15:34:08.913643 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:34:08 crc kubenswrapper[4890]: I0121 15:34:08.913681 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:34:08 crc kubenswrapper[4890]: I0121 15:34:08.916109 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 15:34:08 crc kubenswrapper[4890]: I0121 15:34:08.916161 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 15:34:08 crc kubenswrapper[4890]: I0121 15:34:08.916898 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 15:34:08 crc kubenswrapper[4890]: I0121 15:34:08.917661 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 15:34:09 crc kubenswrapper[4890]: I0121 15:34:09.913648 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:34:09 crc kubenswrapper[4890]: I0121 15:34:09.916534 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 15:34:09 crc kubenswrapper[4890]: I0121 15:34:09.917301 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.747987 4890 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.804421 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vd9wg"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.805910 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.806644 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfnt6"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.808012 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.808244 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.809193 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.809690 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxtfp"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.809827 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.810155 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7b8pk"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.810699 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7b8pk" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.810749 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.810993 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.812320 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.813214 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.816958 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.817455 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9pt8d"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.818044 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.818648 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.835503 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.848213 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.848682 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.848820 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.848937 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.849084 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.849211 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.849668 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.849688 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.849895 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.849938 4890 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-f9d7485db-vq4s5"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.850042 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.850183 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.850238 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.850340 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.850343 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.851731 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwk8l"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.852269 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.854690 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.854741 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.854699 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.854882 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.855416 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.855460 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.855719 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.856486 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.856625 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.856731 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.856914 4890 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.857216 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.863021 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.863749 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.864390 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.864403 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.865743 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hwfnn"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.866155 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.867393 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p86sf"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.870145 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.871214 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.871428 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.871566 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.871662 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.871768 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.871869 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.871960 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.872065 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.872216 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.872325 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 
15:34:13.872458 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.872911 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.873009 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.873103 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.873162 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.873245 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.873390 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.873512 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.875735 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.876670 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.877041 4890 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.877154 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.877294 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.877368 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.877838 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.877902 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.879571 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.880374 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.880665 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.881072 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.881387 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.881783 4890 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.883739 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.884029 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.884566 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.885127 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.886147 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.886677 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.886865 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.886975 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.887666 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.887677 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.887953 4890 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.894028 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.909814 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.910529 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.910725 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.911778 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.911879 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.911957 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.912027 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.912290 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.912319 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 15:34:13 crc 
kubenswrapper[4890]: I0121 15:34:13.913013 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.913532 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.913787 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.913815 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.914554 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vd9wg"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.914616 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.920189 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vkkcm"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.920863 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.922620 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.922975 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.923122 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.924629 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.924781 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.926209 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.926345 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.927757 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.928372 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.941831 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/41bfbbcf-1703-458d-a423-6b6beaa1611d-machine-approver-tls\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.941870 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b77f292c-56d0-4593-a084-c807b6d723ff-etcd-ca\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.941887 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/12968d21-ebc2-42c6-9646-d377088401c4-etcd-client\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.941902 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/12968d21-ebc2-42c6-9646-d377088401c4-audit-policies\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.941918 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/41bfbbcf-1703-458d-a423-6b6beaa1611d-auth-proxy-config\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.941987 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5xvj\" (UniqueName: \"kubernetes.io/projected/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-kube-api-access-h5xvj\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942005 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-metrics-tls\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942034 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c6666c6-bfb9-4874-82b3-fcafc29121c1-serving-cert\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942049 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcbcf\" (UniqueName: \"kubernetes.io/projected/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-kube-api-access-lcbcf\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942064 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/12968d21-ebc2-42c6-9646-d377088401c4-encryption-config\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942105 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942122 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/03a60911-f0d9-463b-b506-feb24e7c8c58-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vdfv8\" (UID: \"03a60911-f0d9-463b-b506-feb24e7c8c58\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942136 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-images\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942160 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9440b7c8-228d-452a-ba7e-ea7f3f8c0254-serving-cert\") pod \"openshift-config-operator-7777fb866f-zrs8z\" (UID: \"9440b7c8-228d-452a-ba7e-ea7f3f8c0254\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942175 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-image-import-ca\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942190 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2dbf\" (UniqueName: \"kubernetes.io/projected/b77f292c-56d0-4593-a084-c807b6d723ff-kube-api-access-w2dbf\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942204 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-serving-cert\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942221 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942242 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9440b7c8-228d-452a-ba7e-ea7f3f8c0254-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zrs8z\" (UID: \"9440b7c8-228d-452a-ba7e-ea7f3f8c0254\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942262 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-audit\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942281 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942301 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4dp4\" (UniqueName: \"kubernetes.io/projected/2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f-kube-api-access-l4dp4\") pod \"downloads-7954f5f757-7b8pk\" (UID: \"2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f\") " pod="openshift-console/downloads-7954f5f757-7b8pk" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942318 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-trusted-ca\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942337 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942368 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942372 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942385 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c6666c6-bfb9-4874-82b3-fcafc29121c1-audit-dir\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942832 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrrx7\" (UniqueName: 
\"kubernetes.io/projected/2c6666c6-bfb9-4874-82b3-fcafc29121c1-kube-api-access-qrrx7\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942867 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942896 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12968d21-ebc2-42c6-9646-d377088401c4-serving-cert\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942927 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2c6666c6-bfb9-4874-82b3-fcafc29121c1-encryption-config\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.942948 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-serving-cert\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:13 crc 
kubenswrapper[4890]: I0121 15:34:13.942968 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqrcv\" (UniqueName: \"kubernetes.io/projected/d5324902-a12c-492c-b66c-29c0b27d84cf-kube-api-access-mqrcv\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943023 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943057 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-etcd-serving-ca\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943080 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbbbh\" (UniqueName: \"kubernetes.io/projected/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-kube-api-access-bbbbh\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943103 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943131 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b77f292c-56d0-4593-a084-c807b6d723ff-config\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943154 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-config\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943172 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-dir\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943199 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2c6666c6-bfb9-4874-82b3-fcafc29121c1-node-pullsecrets\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943220 
4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-752wr\" (UniqueName: \"kubernetes.io/projected/41bfbbcf-1703-458d-a423-6b6beaa1611d-kube-api-access-752wr\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943241 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f7md\" (UniqueName: \"kubernetes.io/projected/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-kube-api-access-6f7md\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943271 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-config\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943294 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-config\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943314 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-policies\") pod 
\"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943338 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkm9\" (UniqueName: \"kubernetes.io/projected/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-kube-api-access-fkkm9\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943374 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b77f292c-56d0-4593-a084-c807b6d723ff-serving-cert\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943394 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b77f292c-56d0-4593-a084-c807b6d723ff-etcd-client\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943420 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b77f292c-56d0-4593-a084-c807b6d723ff-etcd-service-ca\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943441 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-client-ca\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943464 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/12968d21-ebc2-42c6-9646-d377088401c4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943491 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxlq\" (UniqueName: \"kubernetes.io/projected/03a60911-f0d9-463b-b506-feb24e7c8c58-kube-api-access-fmxlq\") pod \"cluster-samples-operator-665b6dd947-vdfv8\" (UID: \"03a60911-f0d9-463b-b506-feb24e7c8c58\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943512 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-serving-cert\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943534 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: 
\"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943566 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12968d21-ebc2-42c6-9646-d377088401c4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943588 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943610 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-client-ca\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943633 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-config\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943654 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/2c6666c6-bfb9-4874-82b3-fcafc29121c1-etcd-client\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943675 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943694 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943723 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5qgc\" (UniqueName: \"kubernetes.io/projected/9440b7c8-228d-452a-ba7e-ea7f3f8c0254-kube-api-access-r5qgc\") pod \"openshift-config-operator-7777fb866f-zrs8z\" (UID: \"9440b7c8-228d-452a-ba7e-ea7f3f8c0254\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943745 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: 
\"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943769 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943795 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943824 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-config\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943846 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12968d21-ebc2-42c6-9646-d377088401c4-audit-dir\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943870 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41bfbbcf-1703-458d-a423-6b6beaa1611d-config\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943892 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943915 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.943937 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f6gx\" (UniqueName: \"kubernetes.io/projected/12968d21-ebc2-42c6-9646-d377088401c4-kube-api-access-2f6gx\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.947966 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.948032 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 
15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.951254 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.951688 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.952971 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.953450 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.954477 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.954650 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.955814 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.956284 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.956518 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-27xqq"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.956781 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.956979 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.957575 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.958129 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.970342 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.973526 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.973725 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.974449 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.974511 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.974924 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.978041 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.978895 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.979273 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8l24p"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.979649 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.980291 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.980724 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.981282 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.981815 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.982809 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.983282 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.983603 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.983764 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.983858 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.984179 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.986392 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.986771 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.987076 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.987190 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7znlr"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.987227 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.987546 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.987580 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.988341 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jv25z"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.988738 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.989388 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.989786 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.990607 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.991832 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.992058 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6qw59"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.994489 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.994606 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7b8pk"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.994611 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.994874 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pqdtm"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.995568 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.996019 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.996884 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.997003 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.999536 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt"] Jan 21 15:34:13 crc kubenswrapper[4890]: I0121 15:34:13.999967 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.000638 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.000964 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfnt6"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.001944 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hwfnn"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.002954 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9pt8d"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.004153 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.004886 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.005822 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kcs8m"] 
Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.006634 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.006870 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p86sf"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.007819 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.007915 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.009055 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vq4s5"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.011145 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.012306 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.013151 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.014180 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.015581 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.016549 4890 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8l24p"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.017645 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vkkcm"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.018812 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxtfp"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.021311 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7znlr"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.021391 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jv25z"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.021924 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pqdtm"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.022916 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6qw59"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.024166 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.034813 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.035734 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.039468 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.039510 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.043337 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwk8l"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.044717 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b77f292c-56d0-4593-a084-c807b6d723ff-config\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.044758 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-config\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.044782 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.044862 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2c6666c6-bfb9-4874-82b3-fcafc29121c1-node-pullsecrets\") pod \"apiserver-76f77b778f-9pt8d\" (UID: 
\"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.044807 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2c6666c6-bfb9-4874-82b3-fcafc29121c1-node-pullsecrets\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.048978 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-752wr\" (UniqueName: \"kubernetes.io/projected/41bfbbcf-1703-458d-a423-6b6beaa1611d-kube-api-access-752wr\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049046 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-dir\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049092 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-config\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049120 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f7md\" (UniqueName: 
\"kubernetes.io/projected/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-kube-api-access-6f7md\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049544 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b77f292c-56d0-4593-a084-c807b6d723ff-config\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049630 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-config\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049741 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b77f292c-56d0-4593-a084-c807b6d723ff-serving-cert\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049778 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b77f292c-56d0-4593-a084-c807b6d723ff-etcd-client\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049852 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-config\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049880 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-policies\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049946 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkkm9\" (UniqueName: \"kubernetes.io/projected/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-kube-api-access-fkkm9\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.049971 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b77f292c-56d0-4593-a084-c807b6d723ff-etcd-service-ca\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050001 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-client-ca\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050034 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fmxlq\" (UniqueName: \"kubernetes.io/projected/03a60911-f0d9-463b-b506-feb24e7c8c58-kube-api-access-fmxlq\") pod \"cluster-samples-operator-665b6dd947-vdfv8\" (UID: \"03a60911-f0d9-463b-b506-feb24e7c8c58\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050061 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-serving-cert\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050083 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/12968d21-ebc2-42c6-9646-d377088401c4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050118 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050140 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12968d21-ebc2-42c6-9646-d377088401c4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050168 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050192 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-config\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050214 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2c6666c6-bfb9-4874-82b3-fcafc29121c1-etcd-client\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050242 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-client-ca\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050292 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: 
\"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050331 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050396 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5qgc\" (UniqueName: \"kubernetes.io/projected/9440b7c8-228d-452a-ba7e-ea7f3f8c0254-kube-api-access-r5qgc\") pod \"openshift-config-operator-7777fb866f-zrs8z\" (UID: \"9440b7c8-228d-452a-ba7e-ea7f3f8c0254\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050426 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050454 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050490 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050520 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-config\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050562 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12968d21-ebc2-42c6-9646-d377088401c4-audit-dir\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050590 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41bfbbcf-1703-458d-a423-6b6beaa1611d-config\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050650 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f6gx\" (UniqueName: \"kubernetes.io/projected/12968d21-ebc2-42c6-9646-d377088401c4-kube-api-access-2f6gx\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: 
I0121 15:34:14.050694 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050725 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050762 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/41bfbbcf-1703-458d-a423-6b6beaa1611d-machine-approver-tls\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050803 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b77f292c-56d0-4593-a084-c807b6d723ff-etcd-ca\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050834 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/12968d21-ebc2-42c6-9646-d377088401c4-etcd-client\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050865 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41bfbbcf-1703-458d-a423-6b6beaa1611d-auth-proxy-config\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050877 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-dir\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050895 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5xvj\" (UniqueName: \"kubernetes.io/projected/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-kube-api-access-h5xvj\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050927 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/12968d21-ebc2-42c6-9646-d377088401c4-audit-policies\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.050959 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051012 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c6666c6-bfb9-4874-82b3-fcafc29121c1-serving-cert\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051054 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/03a60911-f0d9-463b-b506-feb24e7c8c58-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vdfv8\" (UID: \"03a60911-f0d9-463b-b506-feb24e7c8c58\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051083 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-images\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051111 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcbcf\" (UniqueName: \"kubernetes.io/projected/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-kube-api-access-lcbcf\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051133 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/12968d21-ebc2-42c6-9646-d377088401c4-encryption-config\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051174 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051200 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-serving-cert\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051226 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051254 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9440b7c8-228d-452a-ba7e-ea7f3f8c0254-serving-cert\") pod \"openshift-config-operator-7777fb866f-zrs8z\" (UID: \"9440b7c8-228d-452a-ba7e-ea7f3f8c0254\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:14 crc 
kubenswrapper[4890]: I0121 15:34:14.051283 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-image-import-ca\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051306 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2dbf\" (UniqueName: \"kubernetes.io/projected/b77f292c-56d0-4593-a084-c807b6d723ff-kube-api-access-w2dbf\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051333 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9440b7c8-228d-452a-ba7e-ea7f3f8c0254-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zrs8z\" (UID: \"9440b7c8-228d-452a-ba7e-ea7f3f8c0254\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051377 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-audit\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051401 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051428 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051454 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051481 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4dp4\" (UniqueName: \"kubernetes.io/projected/2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f-kube-api-access-l4dp4\") pod \"downloads-7954f5f757-7b8pk\" (UID: \"2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f\") " pod="openshift-console/downloads-7954f5f757-7b8pk" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051511 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-trusted-ca\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051576 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b77f292c-56d0-4593-a084-c807b6d723ff-etcd-service-ca\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051600 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-policies\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.052055 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-config\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.052368 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.052423 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-wkqnl"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.053555 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: 
\"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.053932 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-config\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.054072 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.054908 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6jv4j"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055257 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12968d21-ebc2-42c6-9646-d377088401c4-serving-cert\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.051580 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12968d21-ebc2-42c6-9646-d377088401c4-serving-cert\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055367 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c6666c6-bfb9-4874-82b3-fcafc29121c1-audit-dir\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055399 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrrx7\" (UniqueName: \"kubernetes.io/projected/2c6666c6-bfb9-4874-82b3-fcafc29121c1-kube-api-access-qrrx7\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055429 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055465 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqrcv\" (UniqueName: \"kubernetes.io/projected/d5324902-a12c-492c-b66c-29c0b27d84cf-kube-api-access-mqrcv\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055477 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055504 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2c6666c6-bfb9-4874-82b3-fcafc29121c1-encryption-config\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055543 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-serving-cert\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055579 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055646 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-etcd-serving-ca\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055679 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbbbh\" (UniqueName: 
\"kubernetes.io/projected/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-kube-api-access-bbbbh\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055772 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wkqnl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055859 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-serving-cert\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.055875 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c6666c6-bfb9-4874-82b3-fcafc29121c1-audit-dir\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.056172 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-client-ca\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.056731 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: 
\"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.056762 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.056792 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-audit\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.057467 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b77f292c-56d0-4593-a084-c807b6d723ff-etcd-ca\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.058146 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b77f292c-56d0-4593-a084-c807b6d723ff-etcd-client\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.058598 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b77f292c-56d0-4593-a084-c807b6d723ff-serving-cert\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.058635 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/41bfbbcf-1703-458d-a423-6b6beaa1611d-machine-approver-tls\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.059118 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.059580 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-images\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.059684 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.060120 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.060241 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41bfbbcf-1703-458d-a423-6b6beaa1611d-auth-proxy-config\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.060296 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.060739 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/12968d21-ebc2-42c6-9646-d377088401c4-audit-policies\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.060873 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/12968d21-ebc2-42c6-9646-d377088401c4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.061199 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.061564 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-serving-cert\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.061686 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.061743 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kcs8m"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.061758 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wkqnl"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.061932 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-image-import-ca\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.062974 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-trusted-ca\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 
21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.063375 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.063485 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.063686 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/03a60911-f0d9-463b-b506-feb24e7c8c58-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vdfv8\" (UID: \"03a60911-f0d9-463b-b506-feb24e7c8c58\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.063921 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9440b7c8-228d-452a-ba7e-ea7f3f8c0254-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zrs8z\" (UID: \"9440b7c8-228d-452a-ba7e-ea7f3f8c0254\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.063940 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-config\") pod 
\"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.064095 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12968d21-ebc2-42c6-9646-d377088401c4-audit-dir\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.064115 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/12968d21-ebc2-42c6-9646-d377088401c4-etcd-client\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.064436 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41bfbbcf-1703-458d-a423-6b6beaa1611d-config\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.064803 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9440b7c8-228d-452a-ba7e-ea7f3f8c0254-serving-cert\") pod \"openshift-config-operator-7777fb866f-zrs8z\" (UID: \"9440b7c8-228d-452a-ba7e-ea7f3f8c0254\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.065009 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-config\") pod 
\"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.065392 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-client-ca\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.065466 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12968d21-ebc2-42c6-9646-d377088401c4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.065484 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.065915 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.066288 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.066437 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2c6666c6-bfb9-4874-82b3-fcafc29121c1-encryption-config\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.066877 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.067719 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c6666c6-bfb9-4874-82b3-fcafc29121c1-serving-cert\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.068032 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2c6666c6-bfb9-4874-82b3-fcafc29121c1-etcd-client\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.068125 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-metrics-tls\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.068178 4890 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.068398 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.068690 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c6666c6-bfb9-4874-82b3-fcafc29121c1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.069018 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.069333 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/12968d21-ebc2-42c6-9646-d377088401c4-encryption-config\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.069467 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.070198 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.070505 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.070841 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.071530 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jf26z"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.073192 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jf26z"] Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.073382 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.090289 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.127602 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.148630 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156381 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/566f28b1-744d-4cd6-b60a-f139a071579d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156441 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-service-ca\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156485 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-trusted-ca-bundle\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 
15:34:14.156534 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc8gc\" (UniqueName: \"kubernetes.io/projected/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-kube-api-access-xc8gc\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156601 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/566f28b1-744d-4cd6-b60a-f139a071579d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156685 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-bound-sa-token\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156733 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-config\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156782 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-oauth-config\") pod \"console-f9d7485db-vq4s5\" (UID: 
\"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156817 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-oauth-serving-cert\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156865 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156895 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxvv6\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-kube-api-access-vxvv6\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.156932 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-registry-certificates\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.157000 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-registry-tls\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.157031 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-serving-cert\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.157068 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-trusted-ca\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.157300 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:14.657276823 +0000 UTC m=+137.018719262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.168516 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.188621 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.208587 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.229015 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.248748 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.258410 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.258590 4890 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:14.758565116 +0000 UTC m=+137.120007545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.258682 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8jq\" (UniqueName: \"kubernetes.io/projected/8897f3cf-e9a2-40ce-9353-018d197f47b1-kube-api-access-ht8jq\") pod \"control-plane-machine-set-operator-78cbb6b69f-fsz8b\" (UID: \"8897f3cf-e9a2-40ce-9353-018d197f47b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.258756 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fe9f9b1-5536-4457-ba0f-da6525f6672f-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.258819 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-registration-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: 
\"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.258849 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac9d1b61-10ee-40c7-9c98-b76dc5170609-serving-cert\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.258919 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4bfce20-6513-45e4-af9b-04a94fd694d1-config-volume\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259014 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4225fa07-37fd-4813-b101-8a2a4016c008-metrics-certs\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259042 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44cg\" (UniqueName: \"kubernetes.io/projected/e57c8cd2-2bce-4b03-abfd-65bdda351a79-kube-api-access-q44cg\") pod \"olm-operator-6b444d44fb-phqqn\" (UID: \"e57c8cd2-2bce-4b03-abfd-65bdda351a79\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259077 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbf6\" (UniqueName: 
\"kubernetes.io/projected/10f686ac-45f7-4bf3-a087-b130d50f728c-kube-api-access-lqbf6\") pod \"machine-config-controller-84d6567774-nccn5\" (UID: \"10f686ac-45f7-4bf3-a087-b130d50f728c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259141 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-registry-certificates\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259214 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hw96\" (UniqueName: \"kubernetes.io/projected/4d079920-5e18-4261-a377-9c7311e4f1ef-kube-api-access-7hw96\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259249 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nsrv\" (UniqueName: \"kubernetes.io/projected/5e9fd499-def3-42f1-9b76-ecc733c90a9e-kube-api-access-8nsrv\") pod \"package-server-manager-789f6589d5-fqj99\" (UID: \"5e9fd499-def3-42f1-9b76-ecc733c90a9e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259321 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42de2336-707e-4de7-b753-bb4630b5798e-profile-collector-cert\") pod \"catalog-operator-68c6474976-jjwmr\" (UID: \"42de2336-707e-4de7-b753-bb4630b5798e\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259646 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78166d97-60b0-4509-9725-19ab536a4ecd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4mtd7\" (UID: \"78166d97-60b0-4509-9725-19ab536a4ecd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259770 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9tff\" (UniqueName: \"kubernetes.io/projected/d8be7071-7d2a-492a-b511-be4ff4650873-kube-api-access-g9tff\") pod \"marketplace-operator-79b997595-7znlr\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259848 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9wt\" (UniqueName: \"kubernetes.io/projected/a9a0c022-df6f-424e-baa1-9f5ed3593cde-kube-api-access-5d9wt\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.259959 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77aa25a1-a151-4f4a-99b1-11c1029c7278-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lk7tb\" (UID: \"77aa25a1-a151-4f4a-99b1-11c1029c7278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 
15:34:14.260034 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e57c8cd2-2bce-4b03-abfd-65bdda351a79-srv-cert\") pod \"olm-operator-6b444d44fb-phqqn\" (UID: \"e57c8cd2-2bce-4b03-abfd-65bdda351a79\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.260119 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-registry-tls\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.260197 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-serving-cert\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.260267 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42de2336-707e-4de7-b753-bb4630b5798e-srv-cert\") pod \"catalog-operator-68c6474976-jjwmr\" (UID: \"42de2336-707e-4de7-b753-bb4630b5798e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.260417 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9fd499-def3-42f1-9b76-ecc733c90a9e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqj99\" (UID: 
\"5e9fd499-def3-42f1-9b76-ecc733c90a9e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.260726 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10f686ac-45f7-4bf3-a087-b130d50f728c-proxy-tls\") pod \"machine-config-controller-84d6567774-nccn5\" (UID: \"10f686ac-45f7-4bf3-a087-b130d50f728c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.260813 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26fb4317-1a94-43ee-a438-d86b8d5416de-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xsw2b\" (UID: \"26fb4317-1a94-43ee-a438-d86b8d5416de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.260917 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10f686ac-45f7-4bf3-a087-b130d50f728c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nccn5\" (UID: \"10f686ac-45f7-4bf3-a087-b130d50f728c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.261010 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e57c8cd2-2bce-4b03-abfd-65bdda351a79-profile-collector-cert\") pod \"olm-operator-6b444d44fb-phqqn\" (UID: \"e57c8cd2-2bce-4b03-abfd-65bdda351a79\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:14 crc 
kubenswrapper[4890]: I0121 15:34:14.261087 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b7tj\" (UniqueName: \"kubernetes.io/projected/ac9d1b61-10ee-40c7-9c98-b76dc5170609-kube-api-access-8b7tj\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.261188 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6lxd\" (UniqueName: \"kubernetes.io/projected/814df3ce-75a6-4d82-b9b6-90c0bfba740f-kube-api-access-c6lxd\") pod \"service-ca-operator-777779d784-jv25z\" (UID: \"814df3ce-75a6-4d82-b9b6-90c0bfba740f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.261291 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20fae4bb-ed0b-412b-8c80-0f476ff9c381-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gggm6\" (UID: \"20fae4bb-ed0b-412b-8c80-0f476ff9c381\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.261593 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9a0c022-df6f-424e-baa1-9f5ed3593cde-images\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.262021 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/2339d1ae-d929-4442-b2d4-6bcba1646748-cert\") pod \"ingress-canary-wkqnl\" (UID: \"2339d1ae-d929-4442-b2d4-6bcba1646748\") " pod="openshift-ingress-canary/ingress-canary-wkqnl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.262120 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac9d1b61-10ee-40c7-9c98-b76dc5170609-config\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.262374 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/566f28b1-744d-4cd6-b60a-f139a071579d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.260936 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-registry-certificates\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.262750 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvxr\" (UniqueName: \"kubernetes.io/projected/4225fa07-37fd-4813-b101-8a2a4016c008-kube-api-access-txvxr\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.262822 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78166d97-60b0-4509-9725-19ab536a4ecd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4mtd7\" (UID: \"78166d97-60b0-4509-9725-19ab536a4ecd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.262853 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9a0c022-df6f-424e-baa1-9f5ed3593cde-proxy-tls\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.262931 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-config\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.262965 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814df3ce-75a6-4d82-b9b6-90c0bfba740f-config\") pod \"service-ca-operator-777779d784-jv25z\" (UID: \"814df3ce-75a6-4d82-b9b6-90c0bfba740f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.262991 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-socket-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " 
pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.263012 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vkkcm\" (UID: \"2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.263033 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kfr4\" (UniqueName: \"kubernetes.io/projected/a1405f90-8def-4cc0-9024-278a208d9043-kube-api-access-9kfr4\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.263228 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-bound-sa-token\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.263463 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/220d28eb-0825-4916-bad8-ce74e82ab0b1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jzzf2\" (UID: \"220d28eb-0825-4916-bad8-ce74e82ab0b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.263550 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-certs\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.263695 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/881242ee-5a63-4739-8285-ad7202079c20-signing-cabundle\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.263824 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-config\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.263938 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8897f3cf-e9a2-40ce-9353-018d197f47b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fsz8b\" (UID: \"8897f3cf-e9a2-40ce-9353-018d197f47b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264040 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-oauth-config\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 
15:34:14.264088 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-oauth-serving-cert\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264150 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g89n\" (UniqueName: \"kubernetes.io/projected/2339d1ae-d929-4442-b2d4-6bcba1646748-kube-api-access-2g89n\") pod \"ingress-canary-wkqnl\" (UID: \"2339d1ae-d929-4442-b2d4-6bcba1646748\") " pod="openshift-ingress-canary/ingress-canary-wkqnl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264187 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2bxf\" (UniqueName: \"kubernetes.io/projected/77aa25a1-a151-4f4a-99b1-11c1029c7278-kube-api-access-m2bxf\") pod \"kube-storage-version-migrator-operator-b67b599dd-lk7tb\" (UID: \"77aa25a1-a151-4f4a-99b1-11c1029c7278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264216 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-secret-volume\") pod \"collect-profiles-29483490-vsckn\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264241 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec344157-3705-454c-ac40-9f649e481edf-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-4xnql\" (UID: \"ec344157-3705-454c-ac40-9f649e481edf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264288 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxvv6\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-kube-api-access-vxvv6\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264368 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264395 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdkfj\" (UniqueName: \"kubernetes.io/projected/fe52ad61-000f-4e87-b181-4484719b3593-kube-api-access-vdkfj\") pod \"migrator-59844c95c7-lrdjw\" (UID: \"fe52ad61-000f-4e87-b181-4484719b3593\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264434 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvngh\" (UniqueName: \"kubernetes.io/projected/c4bfce20-6513-45e4-af9b-04a94fd694d1-kube-api-access-hvngh\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264458 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-csi-data-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264485 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w7mz\" (UniqueName: \"kubernetes.io/projected/881242ee-5a63-4739-8285-ad7202079c20-kube-api-access-8w7mz\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264508 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fe9f9b1-5536-4457-ba0f-da6525f6672f-webhook-cert\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264557 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9457v\" (UniqueName: \"kubernetes.io/projected/78166d97-60b0-4509-9725-19ab536a4ecd-kube-api-access-9457v\") pod \"openshift-controller-manager-operator-756b6f6bc6-4mtd7\" (UID: \"78166d97-60b0-4509-9725-19ab536a4ecd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264580 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec344157-3705-454c-ac40-9f649e481edf-config\") pod \"kube-apiserver-operator-766d6c64bb-4xnql\" (UID: 
\"ec344157-3705-454c-ac40-9f649e481edf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264602 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4225fa07-37fd-4813-b101-8a2a4016c008-default-certificate\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264643 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-trusted-ca\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264681 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20fae4bb-ed0b-412b-8c80-0f476ff9c381-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gggm6\" (UID: \"20fae4bb-ed0b-412b-8c80-0f476ff9c381\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264705 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1405f90-8def-4cc0-9024-278a208d9043-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264733 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4225fa07-37fd-4813-b101-8a2a4016c008-stats-auth\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264756 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26fb4317-1a94-43ee-a438-d86b8d5416de-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xsw2b\" (UID: \"26fb4317-1a94-43ee-a438-d86b8d5416de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264791 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28fv6\" (UniqueName: \"kubernetes.io/projected/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-kube-api-access-28fv6\") pod \"collect-profiles-29483490-vsckn\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264813 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4fe9f9b1-5536-4457-ba0f-da6525f6672f-tmpfs\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264836 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26fb4317-1a94-43ee-a438-d86b8d5416de-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xsw2b\" (UID: 
\"26fb4317-1a94-43ee-a438-d86b8d5416de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264868 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20fae4bb-ed0b-412b-8c80-0f476ff9c381-config\") pod \"kube-controller-manager-operator-78b949d7b-gggm6\" (UID: \"20fae4bb-ed0b-412b-8c80-0f476ff9c381\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264916 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-registry-tls\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264954 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-oauth-serving-cert\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.264962 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/566f28b1-744d-4cd6-b60a-f139a071579d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.265104 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-service-ca\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.265145 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5np7g\" (UniqueName: \"kubernetes.io/projected/de85345a-34dc-48d4-9b2d-70006095c0e6-kube-api-access-5np7g\") pod \"dns-operator-744455d44c-pqdtm\" (UID: \"de85345a-34dc-48d4-9b2d-70006095c0e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.265172 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1405f90-8def-4cc0-9024-278a208d9043-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.265276 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:14.765263479 +0000 UTC m=+137.126705898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.265341 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-trusted-ca-bundle\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.265398 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/881242ee-5a63-4739-8285-ad7202079c20-signing-key\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.266089 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/566f28b1-744d-4cd6-b60a-f139a071579d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.267403 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-trusted-ca\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.267483 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-mountpoint-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.267809 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1405f90-8def-4cc0-9024-278a208d9043-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.269884 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-service-ca\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.269940 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.270208 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-trusted-ca-bundle\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.270725 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/566f28b1-744d-4cd6-b60a-f139a071579d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.271271 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-oauth-config\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.275436 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-config-volume\") pod \"collect-profiles-29483490-vsckn\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.275602 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7znlr\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.275753 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc8gc\" (UniqueName: \"kubernetes.io/projected/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-kube-api-access-xc8gc\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 crc 
kubenswrapper[4890]: I0121 15:34:14.276068 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4225fa07-37fd-4813-b101-8a2a4016c008-service-ca-bundle\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276124 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220d28eb-0825-4916-bad8-ce74e82ab0b1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jzzf2\" (UID: \"220d28eb-0825-4916-bad8-ce74e82ab0b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276178 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec344157-3705-454c-ac40-9f649e481edf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4xnql\" (UID: \"ec344157-3705-454c-ac40-9f649e481edf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276249 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvcnm\" (UniqueName: \"kubernetes.io/projected/42de2336-707e-4de7-b753-bb4630b5798e-kube-api-access-xvcnm\") pod \"catalog-operator-68c6474976-jjwmr\" (UID: \"42de2336-707e-4de7-b753-bb4630b5798e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276372 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-plugins-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276463 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9lw\" (UniqueName: \"kubernetes.io/projected/220d28eb-0825-4916-bad8-ce74e82ab0b1-kube-api-access-cs9lw\") pod \"openshift-apiserver-operator-796bbdcf4f-jzzf2\" (UID: \"220d28eb-0825-4916-bad8-ce74e82ab0b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276506 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7znlr\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276540 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77aa25a1-a151-4f4a-99b1-11c1029c7278-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lk7tb\" (UID: \"77aa25a1-a151-4f4a-99b1-11c1029c7278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276602 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac9d1b61-10ee-40c7-9c98-b76dc5170609-trusted-ca\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " 
pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276728 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmr5b\" (UniqueName: \"kubernetes.io/projected/2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a-kube-api-access-nmr5b\") pod \"multus-admission-controller-857f4d67dd-vkkcm\" (UID: \"2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276836 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69hsp\" (UniqueName: \"kubernetes.io/projected/6216fb1e-ffaf-478e-a533-36d1ff128b63-kube-api-access-69hsp\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276868 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c4bfce20-6513-45e4-af9b-04a94fd694d1-metrics-tls\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.276938 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de85345a-34dc-48d4-9b2d-70006095c0e6-metrics-tls\") pod \"dns-operator-744455d44c-pqdtm\" (UID: \"de85345a-34dc-48d4-9b2d-70006095c0e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.277117 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/814df3ce-75a6-4d82-b9b6-90c0bfba740f-serving-cert\") pod \"service-ca-operator-777779d784-jv25z\" (UID: \"814df3ce-75a6-4d82-b9b6-90c0bfba740f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.277251 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5b28\" (UniqueName: \"kubernetes.io/projected/4fe9f9b1-5536-4457-ba0f-da6525f6672f-kube-api-access-q5b28\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.277607 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9a0c022-df6f-424e-baa1-9f5ed3593cde-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.277638 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-node-bootstrap-token\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.279187 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-serving-cert\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:14 
crc kubenswrapper[4890]: I0121 15:34:14.288651 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.308796 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.329044 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.348674 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.368090 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378594 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.378695 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:14.878673689 +0000 UTC m=+137.240116088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378741 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10f686ac-45f7-4bf3-a087-b130d50f728c-proxy-tls\") pod \"machine-config-controller-84d6567774-nccn5\" (UID: \"10f686ac-45f7-4bf3-a087-b130d50f728c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378765 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10f686ac-45f7-4bf3-a087-b130d50f728c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nccn5\" (UID: \"10f686ac-45f7-4bf3-a087-b130d50f728c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378786 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26fb4317-1a94-43ee-a438-d86b8d5416de-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xsw2b\" (UID: \"26fb4317-1a94-43ee-a438-d86b8d5416de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378818 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/e57c8cd2-2bce-4b03-abfd-65bdda351a79-profile-collector-cert\") pod \"olm-operator-6b444d44fb-phqqn\" (UID: \"e57c8cd2-2bce-4b03-abfd-65bdda351a79\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378841 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b7tj\" (UniqueName: \"kubernetes.io/projected/ac9d1b61-10ee-40c7-9c98-b76dc5170609-kube-api-access-8b7tj\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378862 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6lxd\" (UniqueName: \"kubernetes.io/projected/814df3ce-75a6-4d82-b9b6-90c0bfba740f-kube-api-access-c6lxd\") pod \"service-ca-operator-777779d784-jv25z\" (UID: \"814df3ce-75a6-4d82-b9b6-90c0bfba740f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378886 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20fae4bb-ed0b-412b-8c80-0f476ff9c381-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gggm6\" (UID: \"20fae4bb-ed0b-412b-8c80-0f476ff9c381\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378914 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9a0c022-df6f-424e-baa1-9f5ed3593cde-images\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378936 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2339d1ae-d929-4442-b2d4-6bcba1646748-cert\") pod \"ingress-canary-wkqnl\" (UID: \"2339d1ae-d929-4442-b2d4-6bcba1646748\") " pod="openshift-ingress-canary/ingress-canary-wkqnl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378954 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac9d1b61-10ee-40c7-9c98-b76dc5170609-config\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378970 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9a0c022-df6f-424e-baa1-9f5ed3593cde-proxy-tls\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.378994 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvxr\" (UniqueName: \"kubernetes.io/projected/4225fa07-37fd-4813-b101-8a2a4016c008-kube-api-access-txvxr\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379012 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78166d97-60b0-4509-9725-19ab536a4ecd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4mtd7\" 
(UID: \"78166d97-60b0-4509-9725-19ab536a4ecd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379048 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vkkcm\" (UID: \"2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379072 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kfr4\" (UniqueName: \"kubernetes.io/projected/a1405f90-8def-4cc0-9024-278a208d9043-kube-api-access-9kfr4\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379105 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814df3ce-75a6-4d82-b9b6-90c0bfba740f-config\") pod \"service-ca-operator-777779d784-jv25z\" (UID: \"814df3ce-75a6-4d82-b9b6-90c0bfba740f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379122 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-socket-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379139 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-certs\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379161 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/220d28eb-0825-4916-bad8-ce74e82ab0b1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jzzf2\" (UID: \"220d28eb-0825-4916-bad8-ce74e82ab0b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379177 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8897f3cf-e9a2-40ce-9353-018d197f47b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fsz8b\" (UID: \"8897f3cf-e9a2-40ce-9353-018d197f47b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379195 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/881242ee-5a63-4739-8285-ad7202079c20-signing-cabundle\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379226 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec344157-3705-454c-ac40-9f649e481edf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4xnql\" (UID: \"ec344157-3705-454c-ac40-9f649e481edf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:14 crc 
kubenswrapper[4890]: I0121 15:34:14.379255 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g89n\" (UniqueName: \"kubernetes.io/projected/2339d1ae-d929-4442-b2d4-6bcba1646748-kube-api-access-2g89n\") pod \"ingress-canary-wkqnl\" (UID: \"2339d1ae-d929-4442-b2d4-6bcba1646748\") " pod="openshift-ingress-canary/ingress-canary-wkqnl" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379276 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2bxf\" (UniqueName: \"kubernetes.io/projected/77aa25a1-a151-4f4a-99b1-11c1029c7278-kube-api-access-m2bxf\") pod \"kube-storage-version-migrator-operator-b67b599dd-lk7tb\" (UID: \"77aa25a1-a151-4f4a-99b1-11c1029c7278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379294 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-secret-volume\") pod \"collect-profiles-29483490-vsckn\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379316 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379345 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvngh\" (UniqueName: 
\"kubernetes.io/projected/c4bfce20-6513-45e4-af9b-04a94fd694d1-kube-api-access-hvngh\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379414 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdkfj\" (UniqueName: \"kubernetes.io/projected/fe52ad61-000f-4e87-b181-4484719b3593-kube-api-access-vdkfj\") pod \"migrator-59844c95c7-lrdjw\" (UID: \"fe52ad61-000f-4e87-b181-4484719b3593\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379448 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-csi-data-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379474 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w7mz\" (UniqueName: \"kubernetes.io/projected/881242ee-5a63-4739-8285-ad7202079c20-kube-api-access-8w7mz\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379498 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fe9f9b1-5536-4457-ba0f-da6525f6672f-webhook-cert\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379525 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9457v\" (UniqueName: \"kubernetes.io/projected/78166d97-60b0-4509-9725-19ab536a4ecd-kube-api-access-9457v\") pod \"openshift-controller-manager-operator-756b6f6bc6-4mtd7\" (UID: \"78166d97-60b0-4509-9725-19ab536a4ecd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379545 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec344157-3705-454c-ac40-9f649e481edf-config\") pod \"kube-apiserver-operator-766d6c64bb-4xnql\" (UID: \"ec344157-3705-454c-ac40-9f649e481edf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379577 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4225fa07-37fd-4813-b101-8a2a4016c008-default-certificate\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379709 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-socket-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.379729 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10f686ac-45f7-4bf3-a087-b130d50f728c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nccn5\" (UID: \"10f686ac-45f7-4bf3-a087-b130d50f728c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 
15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380035 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-csi-data-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380172 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1405f90-8def-4cc0-9024-278a208d9043-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380234 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20fae4bb-ed0b-412b-8c80-0f476ff9c381-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gggm6\" (UID: \"20fae4bb-ed0b-412b-8c80-0f476ff9c381\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380261 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26fb4317-1a94-43ee-a438-d86b8d5416de-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xsw2b\" (UID: \"26fb4317-1a94-43ee-a438-d86b8d5416de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380314 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4225fa07-37fd-4813-b101-8a2a4016c008-stats-auth\") pod \"router-default-5444994796-27xqq\" (UID: 
\"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380373 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28fv6\" (UniqueName: \"kubernetes.io/projected/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-kube-api-access-28fv6\") pod \"collect-profiles-29483490-vsckn\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.380442 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:14.880392278 +0000 UTC m=+137.241834797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380478 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20fae4bb-ed0b-412b-8c80-0f476ff9c381-config\") pod \"kube-controller-manager-operator-78b949d7b-gggm6\" (UID: \"20fae4bb-ed0b-412b-8c80-0f476ff9c381\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380500 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/4fe9f9b1-5536-4457-ba0f-da6525f6672f-tmpfs\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380524 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26fb4317-1a94-43ee-a438-d86b8d5416de-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xsw2b\" (UID: \"26fb4317-1a94-43ee-a438-d86b8d5416de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380549 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5np7g\" (UniqueName: \"kubernetes.io/projected/de85345a-34dc-48d4-9b2d-70006095c0e6-kube-api-access-5np7g\") pod \"dns-operator-744455d44c-pqdtm\" (UID: \"de85345a-34dc-48d4-9b2d-70006095c0e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380573 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1405f90-8def-4cc0-9024-278a208d9043-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380600 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/881242ee-5a63-4739-8285-ad7202079c20-signing-key\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:14 crc 
kubenswrapper[4890]: I0121 15:34:14.380634 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-mountpoint-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380657 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1405f90-8def-4cc0-9024-278a208d9043-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380682 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-config-volume\") pod \"collect-profiles-29483490-vsckn\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380704 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7znlr\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380736 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4225fa07-37fd-4813-b101-8a2a4016c008-service-ca-bundle\") pod \"router-default-5444994796-27xqq\" (UID: 
\"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380757 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220d28eb-0825-4916-bad8-ce74e82ab0b1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jzzf2\" (UID: \"220d28eb-0825-4916-bad8-ce74e82ab0b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380785 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec344157-3705-454c-ac40-9f649e481edf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4xnql\" (UID: \"ec344157-3705-454c-ac40-9f649e481edf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380802 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec344157-3705-454c-ac40-9f649e481edf-config\") pod \"kube-apiserver-operator-766d6c64bb-4xnql\" (UID: \"ec344157-3705-454c-ac40-9f649e481edf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380811 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-plugins-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380832 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9lw\" (UniqueName: 
\"kubernetes.io/projected/220d28eb-0825-4916-bad8-ce74e82ab0b1-kube-api-access-cs9lw\") pod \"openshift-apiserver-operator-796bbdcf4f-jzzf2\" (UID: \"220d28eb-0825-4916-bad8-ce74e82ab0b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380856 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvcnm\" (UniqueName: \"kubernetes.io/projected/42de2336-707e-4de7-b753-bb4630b5798e-kube-api-access-xvcnm\") pod \"catalog-operator-68c6474976-jjwmr\" (UID: \"42de2336-707e-4de7-b753-bb4630b5798e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380881 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7znlr\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380905 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmr5b\" (UniqueName: \"kubernetes.io/projected/2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a-kube-api-access-nmr5b\") pod \"multus-admission-controller-857f4d67dd-vkkcm\" (UID: \"2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380929 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77aa25a1-a151-4f4a-99b1-11c1029c7278-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lk7tb\" (UID: \"77aa25a1-a151-4f4a-99b1-11c1029c7278\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380952 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac9d1b61-10ee-40c7-9c98-b76dc5170609-trusted-ca\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380969 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4fe9f9b1-5536-4457-ba0f-da6525f6672f-tmpfs\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.380975 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69hsp\" (UniqueName: \"kubernetes.io/projected/6216fb1e-ffaf-478e-a533-36d1ff128b63-kube-api-access-69hsp\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381096 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de85345a-34dc-48d4-9b2d-70006095c0e6-metrics-tls\") pod \"dns-operator-744455d44c-pqdtm\" (UID: \"de85345a-34dc-48d4-9b2d-70006095c0e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381124 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c4bfce20-6513-45e4-af9b-04a94fd694d1-metrics-tls\") pod \"dns-default-jf26z\" (UID: 
\"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381147 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814df3ce-75a6-4d82-b9b6-90c0bfba740f-serving-cert\") pod \"service-ca-operator-777779d784-jv25z\" (UID: \"814df3ce-75a6-4d82-b9b6-90c0bfba740f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381171 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5b28\" (UniqueName: \"kubernetes.io/projected/4fe9f9b1-5536-4457-ba0f-da6525f6672f-kube-api-access-q5b28\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381215 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9a0c022-df6f-424e-baa1-9f5ed3593cde-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381238 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-node-bootstrap-token\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381260 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht8jq\" (UniqueName: 
\"kubernetes.io/projected/8897f3cf-e9a2-40ce-9353-018d197f47b1-kube-api-access-ht8jq\") pod \"control-plane-machine-set-operator-78cbb6b69f-fsz8b\" (UID: \"8897f3cf-e9a2-40ce-9353-018d197f47b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381280 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-registration-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381298 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac9d1b61-10ee-40c7-9c98-b76dc5170609-serving-cert\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381309 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1405f90-8def-4cc0-9024-278a208d9043-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381316 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fe9f9b1-5536-4457-ba0f-da6525f6672f-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381423 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4bfce20-6513-45e4-af9b-04a94fd694d1-config-volume\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381457 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqbf6\" (UniqueName: \"kubernetes.io/projected/10f686ac-45f7-4bf3-a087-b130d50f728c-kube-api-access-lqbf6\") pod \"machine-config-controller-84d6567774-nccn5\" (UID: \"10f686ac-45f7-4bf3-a087-b130d50f728c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381482 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4225fa07-37fd-4813-b101-8a2a4016c008-metrics-certs\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381510 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q44cg\" (UniqueName: \"kubernetes.io/projected/e57c8cd2-2bce-4b03-abfd-65bdda351a79-kube-api-access-q44cg\") pod \"olm-operator-6b444d44fb-phqqn\" (UID: \"e57c8cd2-2bce-4b03-abfd-65bdda351a79\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381564 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nsrv\" (UniqueName: \"kubernetes.io/projected/5e9fd499-def3-42f1-9b76-ecc733c90a9e-kube-api-access-8nsrv\") pod \"package-server-manager-789f6589d5-fqj99\" (UID: \"5e9fd499-def3-42f1-9b76-ecc733c90a9e\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381595 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hw96\" (UniqueName: \"kubernetes.io/projected/4d079920-5e18-4261-a377-9c7311e4f1ef-kube-api-access-7hw96\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381632 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42de2336-707e-4de7-b753-bb4630b5798e-profile-collector-cert\") pod \"catalog-operator-68c6474976-jjwmr\" (UID: \"42de2336-707e-4de7-b753-bb4630b5798e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381660 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78166d97-60b0-4509-9725-19ab536a4ecd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4mtd7\" (UID: \"78166d97-60b0-4509-9725-19ab536a4ecd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381686 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9tff\" (UniqueName: \"kubernetes.io/projected/d8be7071-7d2a-492a-b511-be4ff4650873-kube-api-access-g9tff\") pod \"marketplace-operator-79b997595-7znlr\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.381912 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-mountpoint-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.382196 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9wt\" (UniqueName: \"kubernetes.io/projected/a9a0c022-df6f-424e-baa1-9f5ed3593cde-kube-api-access-5d9wt\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.382452 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-plugins-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.382476 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9a0c022-df6f-424e-baa1-9f5ed3593cde-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.382546 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77aa25a1-a151-4f4a-99b1-11c1029c7278-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lk7tb\" (UID: \"77aa25a1-a151-4f4a-99b1-11c1029c7278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 
15:34:14.382584 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e57c8cd2-2bce-4b03-abfd-65bdda351a79-srv-cert\") pod \"olm-operator-6b444d44fb-phqqn\" (UID: \"e57c8cd2-2bce-4b03-abfd-65bdda351a79\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.382611 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42de2336-707e-4de7-b753-bb4630b5798e-srv-cert\") pod \"catalog-operator-68c6474976-jjwmr\" (UID: \"42de2336-707e-4de7-b753-bb4630b5798e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.382654 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9fd499-def3-42f1-9b76-ecc733c90a9e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqj99\" (UID: \"5e9fd499-def3-42f1-9b76-ecc733c90a9e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.382780 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6216fb1e-ffaf-478e-a533-36d1ff128b63-registration-dir\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.382820 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4225fa07-37fd-4813-b101-8a2a4016c008-service-ca-bundle\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " 
pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.382988 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20fae4bb-ed0b-412b-8c80-0f476ff9c381-config\") pod \"kube-controller-manager-operator-78b949d7b-gggm6\" (UID: \"20fae4bb-ed0b-412b-8c80-0f476ff9c381\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.383189 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec344157-3705-454c-ac40-9f649e481edf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4xnql\" (UID: \"ec344157-3705-454c-ac40-9f649e481edf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.384382 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vkkcm\" (UID: \"2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.385030 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1405f90-8def-4cc0-9024-278a208d9043-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.385230 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/4225fa07-37fd-4813-b101-8a2a4016c008-stats-auth\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.385530 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4225fa07-37fd-4813-b101-8a2a4016c008-default-certificate\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.385870 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20fae4bb-ed0b-412b-8c80-0f476ff9c381-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gggm6\" (UID: \"20fae4bb-ed0b-412b-8c80-0f476ff9c381\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.388739 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.388842 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4225fa07-37fd-4813-b101-8a2a4016c008-metrics-certs\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.392231 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/220d28eb-0825-4916-bad8-ce74e82ab0b1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jzzf2\" (UID: 
\"220d28eb-0825-4916-bad8-ce74e82ab0b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.408796 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.413764 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220d28eb-0825-4916-bad8-ce74e82ab0b1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jzzf2\" (UID: \"220d28eb-0825-4916-bad8-ce74e82ab0b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.429477 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.448625 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.469100 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.472792 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78166d97-60b0-4509-9725-19ab536a4ecd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4mtd7\" (UID: \"78166d97-60b0-4509-9725-19ab536a4ecd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.483671 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.483820 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:14.983788069 +0000 UTC m=+137.345230478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.484295 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.484613 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:14.984605588 +0000 UTC m=+137.346047997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.487829 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.493582 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78166d97-60b0-4509-9725-19ab536a4ecd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4mtd7\" (UID: \"78166d97-60b0-4509-9725-19ab536a4ecd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.508673 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.527936 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.549387 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.577264 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.583652 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac9d1b61-10ee-40c7-9c98-b76dc5170609-trusted-ca\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.585509 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.586025 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.085914501 +0000 UTC m=+137.447356940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.587293 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.587825 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.087781773 +0000 UTC m=+137.449224182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.588747 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.590968 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac9d1b61-10ee-40c7-9c98-b76dc5170609-config\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.609432 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.617828 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac9d1b61-10ee-40c7-9c98-b76dc5170609-serving-cert\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.629517 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.649248 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 15:34:14 
crc kubenswrapper[4890]: I0121 15:34:14.669150 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.673689 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77aa25a1-a151-4f4a-99b1-11c1029c7278-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lk7tb\" (UID: \"77aa25a1-a151-4f4a-99b1-11c1029c7278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.689309 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.689549 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.689796 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.189718161 +0000 UTC m=+137.551160610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.690302 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.690872 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.190843197 +0000 UTC m=+137.552285646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.709021 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.716745 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77aa25a1-a151-4f4a-99b1-11c1029c7278-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lk7tb\" (UID: \"77aa25a1-a151-4f4a-99b1-11c1029c7278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.729425 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.749118 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.768339 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.773056 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/26fb4317-1a94-43ee-a438-d86b8d5416de-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xsw2b\" (UID: \"26fb4317-1a94-43ee-a438-d86b8d5416de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.790180 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.793092 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.793380 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.293326417 +0000 UTC m=+137.654768856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.794280 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.794756 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.294738219 +0000 UTC m=+137.656180668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.808544 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.812227 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26fb4317-1a94-43ee-a438-d86b8d5416de-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xsw2b\" (UID: \"26fb4317-1a94-43ee-a438-d86b8d5416de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.829835 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.849087 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.869291 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.884971 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10f686ac-45f7-4bf3-a087-b130d50f728c-proxy-tls\") pod \"machine-config-controller-84d6567774-nccn5\" (UID: 
\"10f686ac-45f7-4bf3-a087-b130d50f728c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.889876 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.895344 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.895522 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.395491169 +0000 UTC m=+137.756933588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.896466 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.896885 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.396871991 +0000 UTC m=+137.758314410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.908513 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.916230 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fe9f9b1-5536-4457-ba0f-da6525f6672f-webhook-cert\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.916572 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fe9f9b1-5536-4457-ba0f-da6525f6672f-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.929589 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.949038 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.969124 4890 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.987051 4890 request.go:700] Waited for 1.002706565s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dcollect-profiles-config&limit=500&resourceVersion=0 Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.989486 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.993779 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-config-volume\") pod \"collect-profiles-29483490-vsckn\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.997533 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.998135 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.498068601 +0000 UTC m=+137.859511050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:14 crc kubenswrapper[4890]: I0121 15:34:14.999042 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:14 crc kubenswrapper[4890]: E0121 15:34:14.999607 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.499581386 +0000 UTC m=+137.861023855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.009000 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.014615 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e57c8cd2-2bce-4b03-abfd-65bdda351a79-profile-collector-cert\") pod \"olm-operator-6b444d44fb-phqqn\" (UID: \"e57c8cd2-2bce-4b03-abfd-65bdda351a79\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.014686 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-secret-volume\") pod \"collect-profiles-29483490-vsckn\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.017782 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42de2336-707e-4de7-b753-bb4630b5798e-profile-collector-cert\") pod \"catalog-operator-68c6474976-jjwmr\" (UID: \"42de2336-707e-4de7-b753-bb4630b5798e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.028710 4890 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.037555 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e57c8cd2-2bce-4b03-abfd-65bdda351a79-srv-cert\") pod \"olm-operator-6b444d44fb-phqqn\" (UID: \"e57c8cd2-2bce-4b03-abfd-65bdda351a79\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.049620 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.058197 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42de2336-707e-4de7-b753-bb4630b5798e-srv-cert\") pod \"catalog-operator-68c6474976-jjwmr\" (UID: \"42de2336-707e-4de7-b753-bb4630b5798e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.068796 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.088463 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.101084 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.101284 4890 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.601259327 +0000 UTC m=+137.962701756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.102154 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.102830 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.602797052 +0000 UTC m=+137.964239501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.109081 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.116505 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7znlr\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.129248 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.161093 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.164771 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7znlr\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.169439 4890 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.189594 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.191646 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814df3ce-75a6-4d82-b9b6-90c0bfba740f-config\") pod \"service-ca-operator-777779d784-jv25z\" (UID: \"814df3ce-75a6-4d82-b9b6-90c0bfba740f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.203741 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.204083 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.704039914 +0000 UTC m=+138.065482363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.204333 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.204667 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.704652618 +0000 UTC m=+138.066095037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.208265 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.228743 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.237997 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814df3ce-75a6-4d82-b9b6-90c0bfba740f-serving-cert\") pod \"service-ca-operator-777779d784-jv25z\" (UID: \"814df3ce-75a6-4d82-b9b6-90c0bfba740f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.248983 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.268888 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.275227 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8897f3cf-e9a2-40ce-9353-018d197f47b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fsz8b\" (UID: 
\"8897f3cf-e9a2-40ce-9353-018d197f47b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.289935 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.305950 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.306222 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.806185416 +0000 UTC m=+138.167627865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.306622 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.307052 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.807003805 +0000 UTC m=+138.168446294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.308104 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.328793 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.353314 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.368466 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380123 4890 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380214 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9a0c022-df6f-424e-baa1-9f5ed3593cde-proxy-tls podName:a9a0c022-df6f-424e-baa1-9f5ed3593cde nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.880188096 +0000 UTC m=+138.241630535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a9a0c022-df6f-424e-baa1-9f5ed3593cde-proxy-tls") pod "machine-config-operator-74547568cd-cmxm2" (UID: "a9a0c022-df6f-424e-baa1-9f5ed3593cde") : failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380259 4890 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380296 4890 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380405 4890 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380269 4890 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380406 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a9a0c022-df6f-424e-baa1-9f5ed3593cde-images podName:a9a0c022-df6f-424e-baa1-9f5ed3593cde nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.880342129 +0000 UTC m=+138.241784648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/a9a0c022-df6f-424e-baa1-9f5ed3593cde-images") pod "machine-config-operator-74547568cd-cmxm2" (UID: "a9a0c022-df6f-424e-baa1-9f5ed3593cde") : failed to sync configmap cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380478 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-certs podName:4d079920-5e18-4261-a377-9c7311e4f1ef nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.880449262 +0000 UTC m=+138.241891711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-certs") pod "machine-config-server-6jv4j" (UID: "4d079920-5e18-4261-a377-9c7311e4f1ef") : failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380500 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2339d1ae-d929-4442-b2d4-6bcba1646748-cert podName:2339d1ae-d929-4442-b2d4-6bcba1646748 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.880489153 +0000 UTC m=+138.241931592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2339d1ae-d929-4442-b2d4-6bcba1646748-cert") pod "ingress-canary-wkqnl" (UID: "2339d1ae-d929-4442-b2d4-6bcba1646748") : failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.380526 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/881242ee-5a63-4739-8285-ad7202079c20-signing-cabundle podName:881242ee-5a63-4739-8285-ad7202079c20 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.880515523 +0000 UTC m=+138.241957962 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/881242ee-5a63-4739-8285-ad7202079c20-signing-cabundle") pod "service-ca-9c57cc56f-6qw59" (UID: "881242ee-5a63-4739-8285-ad7202079c20") : failed to sync configmap cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.381669 4890 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.381685 4890 secret.go:188] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.381845 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4bfce20-6513-45e4-af9b-04a94fd694d1-metrics-tls podName:c4bfce20-6513-45e4-af9b-04a94fd694d1 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.881762272 +0000 UTC m=+138.243204711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c4bfce20-6513-45e4-af9b-04a94fd694d1-metrics-tls") pod "dns-default-jf26z" (UID: "c4bfce20-6513-45e4-af9b-04a94fd694d1") : failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.381925 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de85345a-34dc-48d4-9b2d-70006095c0e6-metrics-tls podName:de85345a-34dc-48d4-9b2d-70006095c0e6 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.881897315 +0000 UTC m=+138.243339804 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/de85345a-34dc-48d4-9b2d-70006095c0e6-metrics-tls") pod "dns-operator-744455d44c-pqdtm" (UID: "de85345a-34dc-48d4-9b2d-70006095c0e6") : failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.381999 4890 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.382071 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/881242ee-5a63-4739-8285-ad7202079c20-signing-key podName:881242ee-5a63-4739-8285-ad7202079c20 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.882052468 +0000 UTC m=+138.243495027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/881242ee-5a63-4739-8285-ad7202079c20-signing-key") pod "service-ca-9c57cc56f-6qw59" (UID: "881242ee-5a63-4739-8285-ad7202079c20") : failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.382830 4890 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.382886 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4bfce20-6513-45e4-af9b-04a94fd694d1-config-volume podName:c4bfce20-6513-45e4-af9b-04a94fd694d1 nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.882871727 +0000 UTC m=+138.244314246 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c4bfce20-6513-45e4-af9b-04a94fd694d1-config-volume") pod "dns-default-jf26z" (UID: "c4bfce20-6513-45e4-af9b-04a94fd694d1") : failed to sync configmap cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.382897 4890 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.382941 4890 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.382976 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e9fd499-def3-42f1-9b76-ecc733c90a9e-package-server-manager-serving-cert podName:5e9fd499-def3-42f1-9b76-ecc733c90a9e nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.882955819 +0000 UTC m=+138.244398268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/5e9fd499-def3-42f1-9b76-ecc733c90a9e-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-fqj99" (UID: "5e9fd499-def3-42f1-9b76-ecc733c90a9e") : failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.383021 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-node-bootstrap-token podName:4d079920-5e18-4261-a377-9c7311e4f1ef nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.88300029 +0000 UTC m=+138.244442809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-node-bootstrap-token") pod "machine-config-server-6jv4j" (UID: "4d079920-5e18-4261-a377-9c7311e4f1ef") : failed to sync secret cache: timed out waiting for the condition Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.390147 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.407853 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.409159 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:15.909137477 +0000 UTC m=+138.270579916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.409533 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.428212 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.449030 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.469150 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.489399 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.508649 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.510575 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:15 crc 
kubenswrapper[4890]: E0121 15:34:15.511169 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.011142536 +0000 UTC m=+138.372585055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.529169 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.548162 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.568463 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.590026 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.608630 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.612465 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.612674 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.112637003 +0000 UTC m=+138.474079462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.613448 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.613931 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.113908222 +0000 UTC m=+138.475350671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.628498 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.648609 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.669208 4890 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.715258 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmxlq\" (UniqueName: \"kubernetes.io/projected/03a60911-f0d9-463b-b506-feb24e7c8c58-kube-api-access-fmxlq\") pod \"cluster-samples-operator-665b6dd947-vdfv8\" (UID: \"03a60911-f0d9-463b-b506-feb24e7c8c58\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.715579 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.715773 4890 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.215743818 +0000 UTC m=+138.577186257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.716174 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.716653 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.216636708 +0000 UTC m=+138.578079157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.732656 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkkm9\" (UniqueName: \"kubernetes.io/projected/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-kube-api-access-fkkm9\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.745533 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-752wr\" (UniqueName: \"kubernetes.io/projected/41bfbbcf-1703-458d-a423-6b6beaa1611d-kube-api-access-752wr\") pod \"machine-approver-56656f9798-zwlpl\" (UID: \"41bfbbcf-1703-458d-a423-6b6beaa1611d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.756505 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.766447 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f7md\" (UniqueName: \"kubernetes.io/projected/513d9ec4-2b91-4609-ba1a-0e6f0b551d1a-kube-api-access-6f7md\") pod \"machine-api-operator-5694c8668f-vd9wg\" (UID: \"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.788497 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.807271 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.818221 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.818545 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.318517114 +0000 UTC m=+138.679959563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.820658 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.821117 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.321103053 +0000 UTC m=+138.682545482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.825922 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbbbh\" (UniqueName: \"kubernetes.io/projected/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-kube-api-access-bbbbh\") pod \"route-controller-manager-6576b87f9c-c9dkt\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:15 crc kubenswrapper[4890]: W0121 15:34:15.828046 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41bfbbcf_1703_458d_a423_6b6beaa1611d.slice/crio-2e484d4697d3afbb8dcafc6d5e97ad1679bf7d47dfeaa90a9bc8c06158489ea3 WatchSource:0}: Error finding container 2e484d4697d3afbb8dcafc6d5e97ad1679bf7d47dfeaa90a9bc8c06158489ea3: Status 404 returned error can't find the container with id 2e484d4697d3afbb8dcafc6d5e97ad1679bf7d47dfeaa90a9bc8c06158489ea3 Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.829137 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.871977 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrrx7\" (UniqueName: \"kubernetes.io/projected/2c6666c6-bfb9-4874-82b3-fcafc29121c1-kube-api-access-qrrx7\") pod \"apiserver-76f77b778f-9pt8d\" (UID: \"2c6666c6-bfb9-4874-82b3-fcafc29121c1\") " 
pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.889139 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.893603 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5qgc\" (UniqueName: \"kubernetes.io/projected/9440b7c8-228d-452a-ba7e-ea7f3f8c0254-kube-api-access-r5qgc\") pod \"openshift-config-operator-7777fb866f-zrs8z\" (UID: \"9440b7c8-228d-452a-ba7e-ea7f3f8c0254\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.909521 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.925038 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.925151 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.425123829 +0000 UTC m=+138.786566248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.925333 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9fd499-def3-42f1-9b76-ecc733c90a9e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqj99\" (UID: \"5e9fd499-def3-42f1-9b76-ecc733c90a9e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.925469 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9a0c022-df6f-424e-baa1-9f5ed3593cde-images\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.925505 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2339d1ae-d929-4442-b2d4-6bcba1646748-cert\") pod \"ingress-canary-wkqnl\" (UID: \"2339d1ae-d929-4442-b2d4-6bcba1646748\") " pod="openshift-ingress-canary/ingress-canary-wkqnl" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.925560 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9a0c022-df6f-424e-baa1-9f5ed3593cde-proxy-tls\") pod 
\"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.925661 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-certs\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.925700 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/881242ee-5a63-4739-8285-ad7202079c20-signing-cabundle\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.925771 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.925938 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/881242ee-5a63-4739-8285-ad7202079c20-signing-key\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.926074 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/de85345a-34dc-48d4-9b2d-70006095c0e6-metrics-tls\") pod \"dns-operator-744455d44c-pqdtm\" (UID: \"de85345a-34dc-48d4-9b2d-70006095c0e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.926105 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c4bfce20-6513-45e4-af9b-04a94fd694d1-metrics-tls\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.926173 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-node-bootstrap-token\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.926228 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4bfce20-6513-45e4-af9b-04a94fd694d1-config-volume\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:15 crc kubenswrapper[4890]: E0121 15:34:15.926408 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.426384537 +0000 UTC m=+138.787826986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.927101 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9a0c022-df6f-424e-baa1-9f5ed3593cde-images\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.928971 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/881242ee-5a63-4739-8285-ad7202079c20-signing-cabundle\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.929891 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9fd499-def3-42f1-9b76-ecc733c90a9e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqj99\" (UID: \"5e9fd499-def3-42f1-9b76-ecc733c90a9e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.931243 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.931781 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9a0c022-df6f-424e-baa1-9f5ed3593cde-proxy-tls\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.932439 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de85345a-34dc-48d4-9b2d-70006095c0e6-metrics-tls\") pod \"dns-operator-744455d44c-pqdtm\" (UID: \"de85345a-34dc-48d4-9b2d-70006095c0e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.932489 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" event={"ID":"41bfbbcf-1703-458d-a423-6b6beaa1611d","Type":"ContainerStarted","Data":"2e484d4697d3afbb8dcafc6d5e97ad1679bf7d47dfeaa90a9bc8c06158489ea3"} Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.935632 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.935750 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-node-bootstrap-token\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.936644 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4d079920-5e18-4261-a377-9c7311e4f1ef-certs\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.937403 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/881242ee-5a63-4739-8285-ad7202079c20-signing-key\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.949379 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.961313 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2339d1ae-d929-4442-b2d4-6bcba1646748-cert\") pod \"ingress-canary-wkqnl\" (UID: \"2339d1ae-d929-4442-b2d4-6bcba1646748\") " pod="openshift-ingress-canary/ingress-canary-wkqnl" Jan 21 15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.968390 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 
15:34:15 crc kubenswrapper[4890]: I0121 15:34:15.998410 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.006896 4890 request.go:700] Waited for 1.945144384s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.011717 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5xvj\" (UniqueName: \"kubernetes.io/projected/f92adb1c-7d9e-411a-b2a2-2cfd918de6de-kube-api-access-h5xvj\") pod \"authentication-operator-69f744f599-gxtfp\" (UID: \"f92adb1c-7d9e-411a-b2a2-2cfd918de6de\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.026961 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.027134 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.527110157 +0000 UTC m=+138.888552576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.027479 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.027952 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.527942906 +0000 UTC m=+138.889385325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.028300 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.031996 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4dp4\" (UniqueName: \"kubernetes.io/projected/2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f-kube-api-access-l4dp4\") pod \"downloads-7954f5f757-7b8pk\" (UID: \"2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f\") " pod="openshift-console/downloads-7954f5f757-7b8pk" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.043870 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.055415 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqrcv\" (UniqueName: \"kubernetes.io/projected/d5324902-a12c-492c-b66c-29c0b27d84cf-kube-api-access-mqrcv\") pod \"oauth-openshift-558db77b4-mwk8l\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.071114 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2dbf\" (UniqueName: \"kubernetes.io/projected/b77f292c-56d0-4593-a084-c807b6d723ff-kube-api-access-w2dbf\") pod \"etcd-operator-b45778765-hwfnn\" (UID: \"b77f292c-56d0-4593-a084-c807b6d723ff\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.080637 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.086075 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcbcf\" (UniqueName: \"kubernetes.io/projected/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-kube-api-access-lcbcf\") pod \"controller-manager-879f6c89f-bfnt6\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.092812 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.111029 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f6gx\" (UniqueName: \"kubernetes.io/projected/12968d21-ebc2-42c6-9646-d377088401c4-kube-api-access-2f6gx\") pod \"apiserver-7bbb656c7d-svrdg\" (UID: \"12968d21-ebc2-42c6-9646-d377088401c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.121487 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.128733 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56c188de-f8b1-46cc-8fe4-7c58c67f1e19-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8f6sd\" (UID: \"56c188de-f8b1-46cc-8fe4-7c58c67f1e19\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.128851 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.129038 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.129120 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.629100776 +0000 UTC m=+138.990543245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.130316 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.130894 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.630872207 +0000 UTC m=+138.992314676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.141510 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c4bfce20-6513-45e4-af9b-04a94fd694d1-metrics-tls\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.144798 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.150265 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.157826 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.172681 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.178239 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4bfce20-6513-45e4-af9b-04a94fd694d1-config-volume\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.209821 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-bound-sa-token\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.226615 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxvv6\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-kube-api-access-vxvv6\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.235189 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.235781 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.236175 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.736152431 +0000 UTC m=+139.097594840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.253020 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.258053 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc8gc\" (UniqueName: \"kubernetes.io/projected/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-kube-api-access-xc8gc\") pod \"console-f9d7485db-vq4s5\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.264224 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b7tj\" (UniqueName: \"kubernetes.io/projected/ac9d1b61-10ee-40c7-9c98-b76dc5170609-kube-api-access-8b7tj\") pod \"console-operator-58897d9998-8l24p\" (UID: \"ac9d1b61-10ee-40c7-9c98-b76dc5170609\") " pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.282884 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxtfp"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.286091 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6lxd\" (UniqueName: \"kubernetes.io/projected/814df3ce-75a6-4d82-b9b6-90c0bfba740f-kube-api-access-c6lxd\") pod \"service-ca-operator-777779d784-jv25z\" (UID: \"814df3ce-75a6-4d82-b9b6-90c0bfba740f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.302255 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20fae4bb-ed0b-412b-8c80-0f476ff9c381-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gggm6\" (UID: \"20fae4bb-ed0b-412b-8c80-0f476ff9c381\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" 
Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.311629 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.319671 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7b8pk" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.320643 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.326101 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kfr4\" (UniqueName: \"kubernetes.io/projected/a1405f90-8def-4cc0-9024-278a208d9043-kube-api-access-9kfr4\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.332824 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vd9wg"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.337741 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.338167 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:34:16.83815511 +0000 UTC m=+139.199597519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.342078 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvxr\" (UniqueName: \"kubernetes.io/projected/4225fa07-37fd-4813-b101-8a2a4016c008-kube-api-access-txvxr\") pod \"router-default-5444994796-27xqq\" (UID: \"4225fa07-37fd-4813-b101-8a2a4016c008\") " pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.360223 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w7mz\" (UniqueName: \"kubernetes.io/projected/881242ee-5a63-4739-8285-ad7202079c20-kube-api-access-8w7mz\") pod \"service-ca-9c57cc56f-6qw59\" (UID: \"881242ee-5a63-4739-8285-ad7202079c20\") " pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.382425 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2bxf\" (UniqueName: \"kubernetes.io/projected/77aa25a1-a151-4f4a-99b1-11c1029c7278-kube-api-access-m2bxf\") pod \"kube-storage-version-migrator-operator-b67b599dd-lk7tb\" (UID: \"77aa25a1-a151-4f4a-99b1-11c1029c7278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.387788 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.401289 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvngh\" (UniqueName: \"kubernetes.io/projected/c4bfce20-6513-45e4-af9b-04a94fd694d1-kube-api-access-hvngh\") pod \"dns-default-jf26z\" (UID: \"c4bfce20-6513-45e4-af9b-04a94fd694d1\") " pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.423967 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g89n\" (UniqueName: \"kubernetes.io/projected/2339d1ae-d929-4442-b2d4-6bcba1646748-kube-api-access-2g89n\") pod \"ingress-canary-wkqnl\" (UID: \"2339d1ae-d929-4442-b2d4-6bcba1646748\") " pod="openshift-ingress-canary/ingress-canary-wkqnl" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.438430 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.438616 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.938585843 +0000 UTC m=+139.300028252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.439002 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.439494 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:16.939473083 +0000 UTC m=+139.300915562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.450581 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wkqnl" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.457689 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.462430 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9457v\" (UniqueName: \"kubernetes.io/projected/78166d97-60b0-4509-9725-19ab536a4ecd-kube-api-access-9457v\") pod \"openshift-controller-manager-operator-756b6f6bc6-4mtd7\" (UID: \"78166d97-60b0-4509-9725-19ab536a4ecd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.483154 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28fv6\" (UniqueName: \"kubernetes.io/projected/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-kube-api-access-28fv6\") pod \"collect-profiles-29483490-vsckn\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.486029 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfnt6"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.501698 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26fb4317-1a94-43ee-a438-d86b8d5416de-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xsw2b\" (UID: \"26fb4317-1a94-43ee-a438-d86b8d5416de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.520316 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69hsp\" (UniqueName: 
\"kubernetes.io/projected/6216fb1e-ffaf-478e-a533-36d1ff128b63-kube-api-access-69hsp\") pod \"csi-hostpathplugin-kcs8m\" (UID: \"6216fb1e-ffaf-478e-a533-36d1ff128b63\") " pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.522078 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.532335 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.540086 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.540281 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.040253564 +0000 UTC m=+139.401695983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.540428 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.540733 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.040721115 +0000 UTC m=+139.402163524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.545107 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.549587 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5np7g\" (UniqueName: \"kubernetes.io/projected/de85345a-34dc-48d4-9b2d-70006095c0e6-kube-api-access-5np7g\") pod \"dns-operator-744455d44c-pqdtm\" (UID: \"de85345a-34dc-48d4-9b2d-70006095c0e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.551460 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.565583 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.566153 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.572035 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5b28\" (UniqueName: \"kubernetes.io/projected/4fe9f9b1-5536-4457-ba0f-da6525f6672f-kube-api-access-q5b28\") pod \"packageserver-d55dfcdfc-k8b6r\" (UID: \"4fe9f9b1-5536-4457-ba0f-da6525f6672f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.574457 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.582633 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec344157-3705-454c-ac40-9f649e481edf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4xnql\" (UID: \"ec344157-3705-454c-ac40-9f649e481edf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.588294 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.591580 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hwfnn"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.597106 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.601693 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9pt8d"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.607960 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1405f90-8def-4cc0-9024-278a208d9043-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lwk5g\" (UID: \"a1405f90-8def-4cc0-9024-278a208d9043\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.637402 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q44cg\" (UniqueName: \"kubernetes.io/projected/e57c8cd2-2bce-4b03-abfd-65bdda351a79-kube-api-access-q44cg\") pod 
\"olm-operator-6b444d44fb-phqqn\" (UID: \"e57c8cd2-2bce-4b03-abfd-65bdda351a79\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.642854 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.642984 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.142948479 +0000 UTC m=+139.504390888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.644746 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.645442 4890 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.145426216 +0000 UTC m=+139.506868625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.646385 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.648901 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqbf6\" (UniqueName: \"kubernetes.io/projected/10f686ac-45f7-4bf3-a087-b130d50f728c-kube-api-access-lqbf6\") pod \"machine-config-controller-84d6567774-nccn5\" (UID: \"10f686ac-45f7-4bf3-a087-b130d50f728c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.649179 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.656005 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwk8l"] Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.663793 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.675045 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9lw\" (UniqueName: \"kubernetes.io/projected/220d28eb-0825-4916-bad8-ce74e82ab0b1-kube-api-access-cs9lw\") pod \"openshift-apiserver-operator-796bbdcf4f-jzzf2\" (UID: \"220d28eb-0825-4916-bad8-ce74e82ab0b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.688116 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvcnm\" (UniqueName: \"kubernetes.io/projected/42de2336-707e-4de7-b753-bb4630b5798e-kube-api-access-xvcnm\") pod \"catalog-operator-68c6474976-jjwmr\" (UID: \"42de2336-707e-4de7-b753-bb4630b5798e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.703104 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmr5b\" (UniqueName: \"kubernetes.io/projected/2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a-kube-api-access-nmr5b\") pod \"multus-admission-controller-857f4d67dd-vkkcm\" (UID: \"2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.726057 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht8jq\" (UniqueName: \"kubernetes.io/projected/8897f3cf-e9a2-40ce-9353-018d197f47b1-kube-api-access-ht8jq\") pod \"control-plane-machine-set-operator-78cbb6b69f-fsz8b\" (UID: \"8897f3cf-e9a2-40ce-9353-018d197f47b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.726491 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.745341 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.745700 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.245668535 +0000 UTC m=+139.607110954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.747253 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hw96\" (UniqueName: \"kubernetes.io/projected/4d079920-5e18-4261-a377-9c7311e4f1ef-kube-api-access-7hw96\") pod \"machine-config-server-6jv4j\" (UID: \"4d079920-5e18-4261-a377-9c7311e4f1ef\") " pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.769841 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nsrv\" (UniqueName: \"kubernetes.io/projected/5e9fd499-def3-42f1-9b76-ecc733c90a9e-kube-api-access-8nsrv\") 
pod \"package-server-manager-789f6589d5-fqj99\" (UID: \"5e9fd499-def3-42f1-9b76-ecc733c90a9e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.792525 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d9wt\" (UniqueName: \"kubernetes.io/projected/a9a0c022-df6f-424e-baa1-9f5ed3593cde-kube-api-access-5d9wt\") pod \"machine-config-operator-74547568cd-cmxm2\" (UID: \"a9a0c022-df6f-424e-baa1-9f5ed3593cde\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.802001 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.808784 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.815064 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.837724 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.848005 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.848540 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.348521063 +0000 UTC m=+139.709963582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.881132 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.897139 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.907187 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.926957 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.937584 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" event={"ID":"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3","Type":"ContainerStarted","Data":"dee814e355b5a28e436968cc8c260c6cf7f38d3a928a5fced93eff27a229e0d9"} Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.949022 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.949187 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.449156871 +0000 UTC m=+139.810599310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:16 crc kubenswrapper[4890]: I0121 15:34:16.949450 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:16 crc kubenswrapper[4890]: E0121 15:34:16.949893 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.449877198 +0000 UTC m=+139.811319637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.003334 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.014292 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.039905 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6jv4j" Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.050741 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.050982 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.550949175 +0000 UTC m=+139.912391624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.051232 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.051741 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.551725463 +0000 UTC m=+139.913167912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.161588 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.161849 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.661812297 +0000 UTC m=+140.023254746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.162099 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.162692 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.662677197 +0000 UTC m=+140.024119646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.263938 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.264143 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.764100432 +0000 UTC m=+140.125542881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.264456 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.264895 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.76487191 +0000 UTC m=+140.126314349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.365305 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.365595 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.865547329 +0000 UTC m=+140.226989768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.365905 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.366617 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.866592733 +0000 UTC m=+140.228035302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.467564 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.467791 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.967754243 +0000 UTC m=+140.329196692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.468109 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.468655 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:17.968639033 +0000 UTC m=+140.330081482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.536306 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.569479 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.569872 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.069807393 +0000 UTC m=+140.431249822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.570442 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.570991 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.07097515 +0000 UTC m=+140.432417779 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.671910 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.672104 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.172069128 +0000 UTC m=+140.533511547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.672320 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.675271 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.17522974 +0000 UTC m=+140.536672189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.775971 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.776181 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.276143204 +0000 UTC m=+140.637585653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.776570 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.777051 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.277031825 +0000 UTC m=+140.638474264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.878060 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.878648 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.378629554 +0000 UTC m=+140.740071973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:17 crc kubenswrapper[4890]: I0121 15:34:17.980220 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:17 crc kubenswrapper[4890]: E0121 15:34:17.980732 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.480706225 +0000 UTC m=+140.842148674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.081447 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.081666 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.581622409 +0000 UTC m=+140.943064858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.081967 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.082480 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.582459299 +0000 UTC m=+140.943901748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.183860 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.184185 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.6841434 +0000 UTC m=+141.045585849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.184408 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.185074 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.685041251 +0000 UTC m=+141.046483700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.286306 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.286611 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.786565529 +0000 UTC m=+141.148007998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.286985 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.287490 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.78746961 +0000 UTC m=+141.148912049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.388862 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.389186 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.889149961 +0000 UTC m=+141.250592410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.389410 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.389872 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.889851407 +0000 UTC m=+141.251293846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.491178 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.491341 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.991312314 +0000 UTC m=+141.352754763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.491740 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.498997 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:18.998979749 +0000 UTC m=+141.360422198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.593211 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.593515 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.093477677 +0000 UTC m=+141.454920126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.593629 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.594137 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.094120482 +0000 UTC m=+141.455562921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.694306 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.694600 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.194561804 +0000 UTC m=+141.556004253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.694755 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.695254 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.195229889 +0000 UTC m=+141.556672368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.761839 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.761930 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.795815 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.796006 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.29598003 +0000 UTC m=+141.657422489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.796239 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.796535 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.296527252 +0000 UTC m=+141.657969661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.897873 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.898091 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.398057661 +0000 UTC m=+141.759500110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.898188 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:18 crc kubenswrapper[4890]: E0121 15:34:18.898594 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.398576983 +0000 UTC m=+141.760019422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:18 crc kubenswrapper[4890]: I0121 15:34:18.999689 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.000173 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.500147082 +0000 UTC m=+141.861589551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.000727 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.001195 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.501178775 +0000 UTC m=+141.862621214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.102941 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.103185 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.603151454 +0000 UTC m=+141.964593903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.103897 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.104462 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.604439773 +0000 UTC m=+141.965882212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.205110 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9tff\" (UniqueName: \"kubernetes.io/projected/d8be7071-7d2a-492a-b511-be4ff4650873-kube-api-access-g9tff\") pod \"marketplace-operator-79b997595-7znlr\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.206399 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.206628 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.706577485 +0000 UTC m=+142.068019954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.206752 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.207253 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.70723169 +0000 UTC m=+142.068674299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.287207 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdkfj\" (UniqueName: \"kubernetes.io/projected/fe52ad61-000f-4e87-b181-4484719b3593-kube-api-access-vdkfj\") pod \"migrator-59844c95c7-lrdjw\" (UID: \"fe52ad61-000f-4e87-b181-4484719b3593\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw" Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.308391 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.308728 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.808713577 +0000 UTC m=+142.170155976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.312621 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.341559 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw" Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.409523 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.410052 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:19.910035841 +0000 UTC m=+142.271478310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.510882 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.010866973 +0000 UTC m=+142.372309382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.510924 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.511409 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:34:20.011403046 +0000 UTC m=+142.372845455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.511543 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.612366 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.612808 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.112789221 +0000 UTC m=+142.474231630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.713686 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.714161 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.214140325 +0000 UTC m=+142.575582774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.783717 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g"] Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.786219 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg"] Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.814319 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.814441 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.314424445 +0000 UTC m=+142.675866854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.815814 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.816191 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.316180645 +0000 UTC m=+142.677623054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.917337 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.917601 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.41757463 +0000 UTC m=+142.779017039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.918254 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:19 crc kubenswrapper[4890]: E0121 15:34:19.918701 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.418689395 +0000 UTC m=+142.780131794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.959092 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" event={"ID":"03a60911-f0d9-463b-b506-feb24e7c8c58","Type":"ContainerStarted","Data":"26dccfb7e4e47a9fd64733ca10fcf2c0ee68e1705d0f167705509a046dd31f24"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.960495 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" event={"ID":"41bfbbcf-1703-458d-a423-6b6beaa1611d","Type":"ContainerStarted","Data":"ddc8580976c2f822d511a41171b05e792d1dceae34d986f5ce78190af3edc028"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.961499 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" event={"ID":"d5324902-a12c-492c-b66c-29c0b27d84cf","Type":"ContainerStarted","Data":"8b3b0acc2d5a6e6183167eb590e02c110d6fabac1a3f7fc44784c411f5e5cbf1"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.963087 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" event={"ID":"b77f292c-56d0-4593-a084-c807b6d723ff","Type":"ContainerStarted","Data":"54c80b6c53e0a046444d101a64b5d1283941ec1ffc8b1a370d47436bd21f380d"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.964162 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" event={"ID":"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4","Type":"ContainerStarted","Data":"15e271db14de1ccd446b2d95a7751da5ef3a9d638e2ff993850c7dc19c4997c2"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.964999 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" event={"ID":"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a","Type":"ContainerStarted","Data":"143874e93632b3c6f315c3ca3921c0a04fe7a990b081292d34f43c4c38764543"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.965774 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" event={"ID":"2c6666c6-bfb9-4874-82b3-fcafc29121c1","Type":"ContainerStarted","Data":"8b818f825d504ee4a1a9a7e1345f33484db07bbe88b28b4341864d65f29be8be"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.966505 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" event={"ID":"9440b7c8-228d-452a-ba7e-ea7f3f8c0254","Type":"ContainerStarted","Data":"62dacc257bfd609aacdeafa98ec082d2c889a108a571c90c60dd459f7b92fecb"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.967469 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" event={"ID":"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3","Type":"ContainerStarted","Data":"e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.968153 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" event={"ID":"f92adb1c-7d9e-411a-b2a2-2cfd918de6de","Type":"ContainerStarted","Data":"bff3d16cebb9b8ae88dd9eda825ef8b9e68b631ce83cd494da17eb5a2a7c902f"} Jan 21 15:34:19 crc kubenswrapper[4890]: I0121 15:34:19.969140 4890 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" event={"ID":"56c188de-f8b1-46cc-8fe4-7c58c67f1e19","Type":"ContainerStarted","Data":"8442b39ad0cacc7b0f34b9a0e4e2ccb4d30ddfdf81bd6788c354c2379f836b6d"} Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.019568 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.019754 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.519725493 +0000 UTC m=+142.881167902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.020093 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.020623 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.520604123 +0000 UTC m=+142.882046582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: W0121 15:34:20.036844 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12968d21_ebc2_42c6_9646_d377088401c4.slice/crio-160dd00df3628663be004bdbe385072b372969d6fed61ad4ceb776955b10f295 WatchSource:0}: Error finding container 160dd00df3628663be004bdbe385072b372969d6fed61ad4ceb776955b10f295: Status 404 returned error can't find the container with id 160dd00df3628663be004bdbe385072b372969d6fed61ad4ceb776955b10f295 Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.121345 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.121724 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.621685071 +0000 UTC m=+142.983127480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.121865 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.122256 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.622240763 +0000 UTC m=+142.983683172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.167441 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7b8pk"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.211153 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.222784 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.222884 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.722869691 +0000 UTC m=+143.084312100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.223169 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.223464 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.723457554 +0000 UTC m=+143.084899963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.323993 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.324152 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.824126913 +0000 UTC m=+143.185569322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.324287 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.324594 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.824581194 +0000 UTC m=+143.186023603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.424853 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.425176 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:20.92516219 +0000 UTC m=+143.286604599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.478432 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pqdtm"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.485329 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vkkcm"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.530579 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.531204 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.031191071 +0000 UTC m=+143.392633470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.632922 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.633051 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.133028206 +0000 UTC m=+143.494470615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.633487 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.634021 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.134003209 +0000 UTC m=+143.495445628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.734855 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.735029 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.235004005 +0000 UTC m=+143.596446414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.735813 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.736283 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.236261314 +0000 UTC m=+143.597703783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.779536 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.791803 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.796490 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wkqnl"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.800750 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jv25z"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.836986 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.837496 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 15:34:21.337482955 +0000 UTC m=+143.698925364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: W0121 15:34:20.845593 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77aa25a1_a151_4f4a_99b1_11c1029c7278.slice/crio-246dce64908f48c8b842e8d9b0f1565709d53d534727090ba56cb88b15b21054 WatchSource:0}: Error finding container 246dce64908f48c8b842e8d9b0f1565709d53d534727090ba56cb88b15b21054: Status 404 returned error can't find the container with id 246dce64908f48c8b842e8d9b0f1565709d53d534727090ba56cb88b15b21054 Jan 21 15:34:20 crc kubenswrapper[4890]: W0121 15:34:20.851517 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod814df3ce_75a6_4d82_b9b6_90c0bfba740f.slice/crio-ad6b2c76daa364bee6a24b9ca1a2fdac10cff102fbd362b7795b6f13ec4b7f0b WatchSource:0}: Error finding container ad6b2c76daa364bee6a24b9ca1a2fdac10cff102fbd362b7795b6f13ec4b7f0b: Status 404 returned error can't find the container with id ad6b2c76daa364bee6a24b9ca1a2fdac10cff102fbd362b7795b6f13ec4b7f0b Jan 21 15:34:20 crc kubenswrapper[4890]: W0121 15:34:20.855686 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2339d1ae_d929_4442_b2d4_6bcba1646748.slice/crio-fa0b3255f935455a9ef4aa4ee0217200d846cd120d996568797f4123a2b916c5 WatchSource:0}: Error finding container 
fa0b3255f935455a9ef4aa4ee0217200d846cd120d996568797f4123a2b916c5: Status 404 returned error can't find the container with id fa0b3255f935455a9ef4aa4ee0217200d846cd120d996568797f4123a2b916c5 Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.939010 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:20 crc kubenswrapper[4890]: E0121 15:34:20.939335 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.43932353 +0000 UTC m=+143.800765939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.959968 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vq4s5"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.965731 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.985791 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.986766 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7"] Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.993078 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wkqnl" event={"ID":"2339d1ae-d929-4442-b2d4-6bcba1646748","Type":"ContainerStarted","Data":"fa0b3255f935455a9ef4aa4ee0217200d846cd120d996568797f4123a2b916c5"} Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.994554 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" event={"ID":"a1405f90-8def-4cc0-9024-278a208d9043","Type":"ContainerStarted","Data":"89587de975640b1a76f012fc4f7fbb4e18d7089c20bc659acaa86daed4a91a34"} Jan 21 15:34:20 crc kubenswrapper[4890]: I0121 15:34:20.998312 4890 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" event={"ID":"12968d21-ebc2-42c6-9646-d377088401c4","Type":"ContainerStarted","Data":"160dd00df3628663be004bdbe385072b372969d6fed61ad4ceb776955b10f295"} Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:20.999985 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-27xqq" event={"ID":"4225fa07-37fd-4813-b101-8a2a4016c008","Type":"ContainerStarted","Data":"7fac34c84c31f40154dc44948e4e3b950724835c74b3eea8621e33de35f796a0"} Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.001189 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" event={"ID":"814df3ce-75a6-4d82-b9b6-90c0bfba740f","Type":"ContainerStarted","Data":"ad6b2c76daa364bee6a24b9ca1a2fdac10cff102fbd362b7795b6f13ec4b7f0b"} Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.002474 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" event={"ID":"de85345a-34dc-48d4-9b2d-70006095c0e6","Type":"ContainerStarted","Data":"005c27651bf7b1c2262cb5d093f0a1e656d12e5491d0fa14227d035d87d83768"} Jan 21 15:34:21 crc kubenswrapper[4890]: W0121 15:34:21.003502 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb91d73c6_e6ae_4496_bf1d_a00f1518e5ed.slice/crio-d7f64b8dc9567d9f65c216f3343aa1286ddeef75e9fb48e28b0d35a9bacb860d WatchSource:0}: Error finding container d7f64b8dc9567d9f65c216f3343aa1286ddeef75e9fb48e28b0d35a9bacb860d: Status 404 returned error can't find the container with id d7f64b8dc9567d9f65c216f3343aa1286ddeef75e9fb48e28b0d35a9bacb860d Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.003803 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6jv4j" 
event={"ID":"4d079920-5e18-4261-a377-9c7311e4f1ef","Type":"ContainerStarted","Data":"9b1d968683889184b33303cfa9c4db8b6ef4ea91dc4d35fb6420b751fb0694b1"} Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.005623 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" event={"ID":"2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a","Type":"ContainerStarted","Data":"8a734fa7860b09d5ae3ca394d68592568e0f082e443ad871e313a2af93f092c5"} Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.007034 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" event={"ID":"a9a0c022-df6f-424e-baa1-9f5ed3593cde","Type":"ContainerStarted","Data":"e5577ff8720dfe9b6d0e2613ee53828b6f29ac2877f0a6ae3033d9ca07bfeb56"} Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.007878 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" event={"ID":"220d28eb-0825-4916-bad8-ce74e82ab0b1","Type":"ContainerStarted","Data":"e40d547e11bf5d04b45256da17079462f0a6fbc769fc23e4e58abef09ee49dc1"} Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.009533 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7b8pk" event={"ID":"2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f","Type":"ContainerStarted","Data":"fae2f64f7936937c9556bcf24b324ff6513686861d4094223eb7bf31f8561f63"} Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.010396 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" event={"ID":"77aa25a1-a151-4f4a-99b1-11c1029c7278","Type":"ContainerStarted","Data":"246dce64908f48c8b842e8d9b0f1565709d53d534727090ba56cb88b15b21054"} Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.010587 4890 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.013924 4890 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-c9dkt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.013963 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" podUID="f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 21 15:34:21 crc kubenswrapper[4890]: W0121 15:34:21.019212 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e9fd499_def3_42f1_9b76_ecc733c90a9e.slice/crio-90ea1b7ce8d87a69947bbda9f00c8b08fd106fcceea34c9d23c2806bdd7cc190 WatchSource:0}: Error finding container 90ea1b7ce8d87a69947bbda9f00c8b08fd106fcceea34c9d23c2806bdd7cc190: Status 404 returned error can't find the container with id 90ea1b7ce8d87a69947bbda9f00c8b08fd106fcceea34c9d23c2806bdd7cc190 Jan 21 15:34:21 crc kubenswrapper[4890]: W0121 15:34:21.019482 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5373aa1_b2ba_47c7_bbdb_1835b9758c77.slice/crio-fae947bcac611759f280ea3159bce3708660d8cf718c390041f3234b9382e2cc WatchSource:0}: Error finding container fae947bcac611759f280ea3159bce3708660d8cf718c390041f3234b9382e2cc: Status 404 returned error can't find the container with id fae947bcac611759f280ea3159bce3708660d8cf718c390041f3234b9382e2cc Jan 21 15:34:21 
crc kubenswrapper[4890]: W0121 15:34:21.019625 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78166d97_60b0_4509_9725_19ab536a4ecd.slice/crio-e96b51d42a18ac3d1ee46e186a551675696e044ccbbf29de0eb4e47667d4ee94 WatchSource:0}: Error finding container e96b51d42a18ac3d1ee46e186a551675696e044ccbbf29de0eb4e47667d4ee94: Status 404 returned error can't find the container with id e96b51d42a18ac3d1ee46e186a551675696e044ccbbf29de0eb4e47667d4ee94 Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.026211 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" podStartSLOduration=124.026198694 podStartE2EDuration="2m4.026198694s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:21.024121407 +0000 UTC m=+143.385563816" watchObservedRunningTime="2026-01-21 15:34:21.026198694 +0000 UTC m=+143.387641103" Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.039378 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.039546 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.539526078 +0000 UTC m=+143.900968487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.039726 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.040428 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.540416439 +0000 UTC m=+143.901858848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.151786 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.155552 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.155977 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.655961107 +0000 UTC m=+144.017403516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.162626 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5"] Jan 21 15:34:21 crc kubenswrapper[4890]: W0121 15:34:21.207434 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10f686ac_45f7_4bf3_a087_b130d50f728c.slice/crio-5c5034d9baa206fd9fff9867d32612a3dc96cc746c3c440624547c3f57f3c15d WatchSource:0}: Error finding container 5c5034d9baa206fd9fff9867d32612a3dc96cc746c3c440624547c3f57f3c15d: Status 404 returned error can't find the container with id 5c5034d9baa206fd9fff9867d32612a3dc96cc746c3c440624547c3f57f3c15d Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.214625 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.218747 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.233696 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jf26z"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.238543 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7znlr"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.243286 
4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.246695 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6qw59"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.248322 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.252219 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.257204 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.257774 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.757760621 +0000 UTC m=+144.119203040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: W0121 15:34:21.293934 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fe9f9b1_5536_4457_ba0f_da6525f6672f.slice/crio-703b85d220d5eb03cb023c735dcd9f9cd597fd1899184211667b5df0962d12fb WatchSource:0}: Error finding container 703b85d220d5eb03cb023c735dcd9f9cd597fd1899184211667b5df0962d12fb: Status 404 returned error can't find the container with id 703b85d220d5eb03cb023c735dcd9f9cd597fd1899184211667b5df0962d12fb Jan 21 15:34:21 crc kubenswrapper[4890]: W0121 15:34:21.316321 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4bfce20_6513_45e4_af9b_04a94fd694d1.slice/crio-a9372047ec71d3e12ef80dd0b35d3bae5adf211d8027bb11c1e333faa0fa0aa4 WatchSource:0}: Error finding container a9372047ec71d3e12ef80dd0b35d3bae5adf211d8027bb11c1e333faa0fa0aa4: Status 404 returned error can't find the container with id a9372047ec71d3e12ef80dd0b35d3bae5adf211d8027bb11c1e333faa0fa0aa4 Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.358134 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.358296 4890 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.858273476 +0000 UTC m=+144.219715885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.358496 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.358960 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.858941082 +0000 UTC m=+144.220383501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.444291 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kcs8m"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.447048 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8l24p"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.453950 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.458051 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql"] Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.460330 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.460447 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 15:34:21.960428109 +0000 UTC m=+144.321870518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.460595 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.460875 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:21.960862579 +0000 UTC m=+144.322304988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: W0121 15:34:21.478650 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8897f3cf_e9a2_40ce_9353_018d197f47b1.slice/crio-48aac513a149083947d20a1da73f32e74a5e801e9ddd2de520ec25aca3b95e4b WatchSource:0}: Error finding container 48aac513a149083947d20a1da73f32e74a5e801e9ddd2de520ec25aca3b95e4b: Status 404 returned error can't find the container with id 48aac513a149083947d20a1da73f32e74a5e801e9ddd2de520ec25aca3b95e4b Jan 21 15:34:21 crc kubenswrapper[4890]: W0121 15:34:21.480762 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6216fb1e_ffaf_478e_a533_36d1ff128b63.slice/crio-0e4f341bee623728432e743f9dda19d8d7787c53ffe894cade5db0f966c3afc0 WatchSource:0}: Error finding container 0e4f341bee623728432e743f9dda19d8d7787c53ffe894cade5db0f966c3afc0: Status 404 returned error can't find the container with id 0e4f341bee623728432e743f9dda19d8d7787c53ffe894cade5db0f966c3afc0 Jan 21 15:34:21 crc kubenswrapper[4890]: W0121 15:34:21.526019 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec344157_3705_454c_ac40_9f649e481edf.slice/crio-3a7275982add7f5733066568319f184b8aa17303f7991918cf6748fc648935d4 WatchSource:0}: Error finding container 3a7275982add7f5733066568319f184b8aa17303f7991918cf6748fc648935d4: Status 404 returned error can't find the 
container with id 3a7275982add7f5733066568319f184b8aa17303f7991918cf6748fc648935d4 Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.561394 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.561665 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.06163366 +0000 UTC m=+144.423076129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.663321 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.664224 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.164202842 +0000 UTC m=+144.525645311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.765152 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.765507 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.265478724 +0000 UTC m=+144.626921143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.868542 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.869015 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.368983008 +0000 UTC m=+144.730425417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.970036 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.970238 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.470206359 +0000 UTC m=+144.831648778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:21 crc kubenswrapper[4890]: I0121 15:34:21.970715 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:21 crc kubenswrapper[4890]: E0121 15:34:21.971020 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.471005557 +0000 UTC m=+144.832447966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.024216 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" event={"ID":"881242ee-5a63-4739-8285-ad7202079c20","Type":"ContainerStarted","Data":"0fceea358b627bec1d855929ee5c3c6ff06d02f45fa39885db21774e90df630f"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.026401 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" event={"ID":"a9a0c022-df6f-424e-baa1-9f5ed3593cde","Type":"ContainerStarted","Data":"52333bb8d4f5ada44f485f9d385c8280d64c117651379f6cb4198169ea0bd7f7"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.028622 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" event={"ID":"d5373aa1-b2ba-47c7-bbdb-1835b9758c77","Type":"ContainerStarted","Data":"fae947bcac611759f280ea3159bce3708660d8cf718c390041f3234b9382e2cc"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.032787 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" event={"ID":"de85345a-34dc-48d4-9b2d-70006095c0e6","Type":"ContainerStarted","Data":"9e74e5d056a255674ad3db6ce0a10087f4dc4344429738b5d652ca34a40d3051"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.037160 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" 
event={"ID":"4fe9f9b1-5536-4457-ba0f-da6525f6672f","Type":"ContainerStarted","Data":"703b85d220d5eb03cb023c735dcd9f9cd597fd1899184211667b5df0962d12fb"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.041391 4890 generic.go:334] "Generic (PLEG): container finished" podID="12968d21-ebc2-42c6-9646-d377088401c4" containerID="170fea8f028342a8ea9c2dd1dd48c8d3977738581842248bffee7549b6ec02b2" exitCode=0 Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.041400 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" event={"ID":"12968d21-ebc2-42c6-9646-d377088401c4","Type":"ContainerDied","Data":"170fea8f028342a8ea9c2dd1dd48c8d3977738581842248bffee7549b6ec02b2"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.044794 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" event={"ID":"20fae4bb-ed0b-412b-8c80-0f476ff9c381","Type":"ContainerStarted","Data":"916831f90c6863f2eb9255fb0a33a0e35ecca682b684d439973a15f04ab404d6"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.046672 4890 generic.go:334] "Generic (PLEG): container finished" podID="2c6666c6-bfb9-4874-82b3-fcafc29121c1" containerID="551229312de7905dc27bd16c7a870d9f465d98c2896f8de9abe4b936990b096e" exitCode=0 Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.046734 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" event={"ID":"2c6666c6-bfb9-4874-82b3-fcafc29121c1","Type":"ContainerDied","Data":"551229312de7905dc27bd16c7a870d9f465d98c2896f8de9abe4b936990b096e"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.052291 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" 
event={"ID":"41bfbbcf-1703-458d-a423-6b6beaa1611d","Type":"ContainerStarted","Data":"2377edbd4a80137860e1bcd364d3a76e080bdd221d99ea1e0f9a7f6e43ac065d"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.058135 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" event={"ID":"814df3ce-75a6-4d82-b9b6-90c0bfba740f","Type":"ContainerStarted","Data":"598100325f170bf81e8d6392918cf2e7678d41cbc5ecc48d723d2f69976e149f"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.060516 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw" event={"ID":"fe52ad61-000f-4e87-b181-4484719b3593","Type":"ContainerStarted","Data":"73dbf5cb8e537431a3912d894dd1e1f11fb05eb1cf847f338c6689da61048039"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.071479 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" event={"ID":"f92adb1c-7d9e-411a-b2a2-2cfd918de6de","Type":"ContainerStarted","Data":"8ab3b14b5a2fcf8da55ba37dd64b8d4cf216bc39fcb8a830d8690c4d3ff594b8"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.071785 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.072385 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.572336691 +0000 UTC m=+144.933779100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.079511 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" event={"ID":"56c188de-f8b1-46cc-8fe4-7c58c67f1e19","Type":"ContainerStarted","Data":"7c195e6969f1cc8915195cf3c3387a01501b9de38080ee553d73faa94b8382fd"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.080788 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8l24p" event={"ID":"ac9d1b61-10ee-40c7-9c98-b76dc5170609","Type":"ContainerStarted","Data":"58fc0a5a08f61babb74ca39b1eb341fec2fe76b8a457fb60906b8e3bb84cfd00"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.089043 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" event={"ID":"b77f292c-56d0-4593-a084-c807b6d723ff","Type":"ContainerStarted","Data":"f711bbb2c77ff77c740809bd27cd74ddae5e50df65e79c91cf7f020cb3141665"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.097454 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" event={"ID":"03a60911-f0d9-463b-b506-feb24e7c8c58","Type":"ContainerStarted","Data":"99e36255f49ce45fdf1af11f271de578ac4dde8f6f2208a67adf2f2d51da895c"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.099374 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7b8pk" 
event={"ID":"2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f","Type":"ContainerStarted","Data":"7259b5b0d2bb5ebed86a3198be5025272d12cb5907dd24305775e76bcffc7ea5"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.101508 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" event={"ID":"d5324902-a12c-492c-b66c-29c0b27d84cf","Type":"ContainerStarted","Data":"9c255ba7c7f406fead97d1aec53ed4a2109121a5c8160dd62777a5755f5b6ace"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.102629 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" event={"ID":"ec344157-3705-454c-ac40-9f649e481edf","Type":"ContainerStarted","Data":"3a7275982add7f5733066568319f184b8aa17303f7991918cf6748fc648935d4"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.111511 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vq4s5" event={"ID":"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed","Type":"ContainerStarted","Data":"5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.111683 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vq4s5" event={"ID":"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed","Type":"ContainerStarted","Data":"d7f64b8dc9567d9f65c216f3343aa1286ddeef75e9fb48e28b0d35a9bacb860d"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.114486 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jf26z" event={"ID":"c4bfce20-6513-45e4-af9b-04a94fd694d1","Type":"ContainerStarted","Data":"a9372047ec71d3e12ef80dd0b35d3bae5adf211d8027bb11c1e333faa0fa0aa4"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.118554 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-27xqq" 
event={"ID":"4225fa07-37fd-4813-b101-8a2a4016c008","Type":"ContainerStarted","Data":"e393cbc0c3289a40952b5c3cfb173952452581b0e4503ddbcbeb8d1173801734"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.120661 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" event={"ID":"2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a","Type":"ContainerStarted","Data":"004aa9a04e5dd3db4ed2945bd14eb0e414b0ee2e11a44b789cb70fb3334d2512"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.121981 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" event={"ID":"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a","Type":"ContainerStarted","Data":"d4ff07e24c028dbee62a90d63729376b1ef62d9f3d7c7514de1560cf1415aba6"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.122885 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" event={"ID":"8897f3cf-e9a2-40ce-9353-018d197f47b1","Type":"ContainerStarted","Data":"48aac513a149083947d20a1da73f32e74a5e801e9ddd2de520ec25aca3b95e4b"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.123735 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" event={"ID":"26fb4317-1a94-43ee-a438-d86b8d5416de","Type":"ContainerStarted","Data":"82b6087516e61b77c88c22285052b436d0f06d57c859f17e6e5bcaed73edf4fd"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.124870 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" event={"ID":"d8be7071-7d2a-492a-b511-be4ff4650873","Type":"ContainerStarted","Data":"f47d90f20e184be4e03502233baa4c507be7858632c1d399f4acd0b55e1e6c2f"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.127103 4890 generic.go:334] "Generic (PLEG): container finished" 
podID="9440b7c8-228d-452a-ba7e-ea7f3f8c0254" containerID="67bd7d87b9ac3bc42c5cf5c99942037b6b7a290555f712cd2785e695f72ae100" exitCode=0 Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.127480 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" event={"ID":"9440b7c8-228d-452a-ba7e-ea7f3f8c0254","Type":"ContainerDied","Data":"67bd7d87b9ac3bc42c5cf5c99942037b6b7a290555f712cd2785e695f72ae100"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.135165 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" event={"ID":"a1405f90-8def-4cc0-9024-278a208d9043","Type":"ContainerStarted","Data":"f9557dee289e407294b22c2aa7b50aa4fdb8f3c118fe147dd2ec2781e80be683"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.136725 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" event={"ID":"10f686ac-45f7-4bf3-a087-b130d50f728c","Type":"ContainerStarted","Data":"5c5034d9baa206fd9fff9867d32612a3dc96cc746c3c440624547c3f57f3c15d"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.141786 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" event={"ID":"5e9fd499-def3-42f1-9b76-ecc733c90a9e","Type":"ContainerStarted","Data":"90ea1b7ce8d87a69947bbda9f00c8b08fd106fcceea34c9d23c2806bdd7cc190"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.146467 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6jv4j" event={"ID":"4d079920-5e18-4261-a377-9c7311e4f1ef","Type":"ContainerStarted","Data":"66a2c94f4d41758aa9ea67682c768b2de85919e7b6c45f1037e8e361532b7543"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.149642 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" event={"ID":"220d28eb-0825-4916-bad8-ce74e82ab0b1","Type":"ContainerStarted","Data":"d3011ec0b0a423106debf51b5f97fbec29efcd460665a32e4fcaf63a84be7d55"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.159737 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" event={"ID":"77aa25a1-a151-4f4a-99b1-11c1029c7278","Type":"ContainerStarted","Data":"fac9e9dd4ae81b50bdec99852311764eef96ae4c197d686b1ad43e746c05ab78"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.178759 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.179735 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.679715463 +0000 UTC m=+145.041157872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.181608 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" event={"ID":"e57c8cd2-2bce-4b03-abfd-65bdda351a79","Type":"ContainerStarted","Data":"db1097f0858b99cb964b040182b4713f70eb89e811d626ac33e7c00ae3a56949"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.212780 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" event={"ID":"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4","Type":"ContainerStarted","Data":"490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.275255 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" event={"ID":"42de2336-707e-4de7-b753-bb4630b5798e","Type":"ContainerStarted","Data":"d12cabfc5dc0c1fc31a7cb83481f74221758fa7478ecb8a885bae8852486aa17"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.279998 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" event={"ID":"78166d97-60b0-4509-9725-19ab536a4ecd","Type":"ContainerStarted","Data":"e96b51d42a18ac3d1ee46e186a551675696e044ccbbf29de0eb4e47667d4ee94"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.280271 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.280584 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.780569735 +0000 UTC m=+145.142012144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.293289 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" event={"ID":"6216fb1e-ffaf-478e-a533-36d1ff128b63","Type":"ContainerStarted","Data":"0e4f341bee623728432e743f9dda19d8d7787c53ffe894cade5db0f966c3afc0"} Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.306239 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.386289 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: 
\"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.387676 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.88766092 +0000 UTC m=+145.249103329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.490519 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.491067 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.99102019 +0000 UTC m=+145.352462599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.491414 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.492023 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:22.992003303 +0000 UTC m=+145.353445712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.594907 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.595215 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.095200249 +0000 UTC m=+145.456642648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.695915 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.696252 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.196240446 +0000 UTC m=+145.557682855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.797973 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.801281 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.301250524 +0000 UTC m=+145.662692943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.808333 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.809323 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.309303518 +0000 UTC m=+145.670745927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.911791 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:22 crc kubenswrapper[4890]: E0121 15:34:22.912111 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.412097975 +0000 UTC m=+145.773540384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.919781 4890 csr.go:261] certificate signing request csr-p2qhv is approved, waiting to be issued Jan 21 15:34:22 crc kubenswrapper[4890]: I0121 15:34:22.928807 4890 csr.go:257] certificate signing request csr-p2qhv is issued Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.015930 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.016226 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.516214952 +0000 UTC m=+145.877657361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.117040 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.117381 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.617366132 +0000 UTC m=+145.978808541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.218242 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.218865 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.718835499 +0000 UTC m=+146.080278108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.301115 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" event={"ID":"56c188de-f8b1-46cc-8fe4-7c58c67f1e19","Type":"ContainerStarted","Data":"07d6117cb2cc4204f64f3d1ccbe3616b12aa83c465acc4d8b0e96c03cbea6e54"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.304577 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jf26z" event={"ID":"c4bfce20-6513-45e4-af9b-04a94fd694d1","Type":"ContainerStarted","Data":"338157e205a1a58509e4ea246813b3589dc2c7cb2bfce8af0ec765438d9a64bf"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.304686 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jf26z" event={"ID":"c4bfce20-6513-45e4-af9b-04a94fd694d1","Type":"ContainerStarted","Data":"756d74335b66b512d90699eb894cbb5099475e5da4704315efa3056362275e17"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.304718 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.307640 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" event={"ID":"881242ee-5a63-4739-8285-ad7202079c20","Type":"ContainerStarted","Data":"3ee5cf2ad294ee630e469e79991d5bafc46ae3c681d55e668026ccf9056d4fb8"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.309206 4890 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" event={"ID":"ec344157-3705-454c-ac40-9f649e481edf","Type":"ContainerStarted","Data":"8ca24bb69792f486f8e0bf475b64ee2aecb3218705e2484517274a7c89a66341"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.311462 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" event={"ID":"9440b7c8-228d-452a-ba7e-ea7f3f8c0254","Type":"ContainerStarted","Data":"d7df0a5f27a97fb3292d3e6526ebbe59df0c00112c1f4d6734ec1a35bb3be92f"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.311608 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.313504 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" event={"ID":"de85345a-34dc-48d4-9b2d-70006095c0e6","Type":"ContainerStarted","Data":"3c42d565e03ac2b869eaeef3b683aa4e7d9d6e4b46aea834f19c9e015c76b186"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.315036 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wkqnl" event={"ID":"2339d1ae-d929-4442-b2d4-6bcba1646748","Type":"ContainerStarted","Data":"f9b79929d7fdfbb9d3657e519f1c5e9736ccb502e8f1ad2fd0fd7733febd0f6f"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.317208 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" event={"ID":"42de2336-707e-4de7-b753-bb4630b5798e","Type":"ContainerStarted","Data":"c8f658003ac4880e4be1bd3e0a3cc61c6e8bb7b6892d4829c6791d23255f6798"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.317457 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.318699 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.318862 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.818816901 +0000 UTC m=+146.180259310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.318979 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.319620 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" 
event={"ID":"26fb4317-1a94-43ee-a438-d86b8d5416de","Type":"ContainerStarted","Data":"088eebb4cce94d9300a2dc0afd71d8bd3635248c7b39c5d8761c63011adc36b8"} Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.319903 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.819891836 +0000 UTC m=+146.181334245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.322230 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" event={"ID":"20fae4bb-ed0b-412b-8c80-0f476ff9c381","Type":"ContainerStarted","Data":"497e4a4408e851ea6c266c23fb1a74bd53ab8a7be9fab593fd1102912ef177cc"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.324448 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" event={"ID":"513d9ec4-2b91-4609-ba1a-0e6f0b551d1a","Type":"ContainerStarted","Data":"874f6b94dee901e734b241012de2d201eddc9d9843615a91d85d34a696cbbcdc"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.327024 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" 
event={"ID":"10f686ac-45f7-4bf3-a087-b130d50f728c","Type":"ContainerStarted","Data":"ddefb8d66e13cce49aeb984d3583449b5e273fd26c2c986f90f0e486eb7be90a"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.327085 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" event={"ID":"10f686ac-45f7-4bf3-a087-b130d50f728c","Type":"ContainerStarted","Data":"eeba20b58292f4a3319666e638af1c8480de7f9641d2a5f028d92e38849f36bd"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.329973 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" event={"ID":"d5373aa1-b2ba-47c7-bbdb-1835b9758c77","Type":"ContainerStarted","Data":"6c69e4fa8334a52053bc0b1cb73fcd66f218305c00dbea9f5738a65d4347ffb6"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.332789 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" event={"ID":"4fe9f9b1-5536-4457-ba0f-da6525f6672f","Type":"ContainerStarted","Data":"eabd68771fc9d2e5ed2167a921a83cb9ff9c8476bd896f2d838abe1285743133"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.333010 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.335572 4890 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8b6r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body= Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.335646 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" podUID="4fe9f9b1-5536-4457-ba0f-da6525f6672f" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.336328 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" event={"ID":"2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a","Type":"ContainerStarted","Data":"adbf90a021a30f52b33f55fd8df3c8503c4c5b6f500b820d1536d4994ffd6c2f"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.339673 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" event={"ID":"a9a0c022-df6f-424e-baa1-9f5ed3593cde","Type":"ContainerStarted","Data":"4d4463efe6fcad7c5b6f45c2d9ad62dfe00e71fe93156e59e8e95fca0b1076dd"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.341965 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" event={"ID":"5e9fd499-def3-42f1-9b76-ecc733c90a9e","Type":"ContainerStarted","Data":"00b18bf2174ae90b5d3ab040b923a974749e30df75cffa4890834f9df707cd24"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.342021 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" event={"ID":"5e9fd499-def3-42f1-9b76-ecc733c90a9e","Type":"ContainerStarted","Data":"05b96e9a0b5f7e6779e45d56efc6c850b19d479262b60be787e579dc1d9a07b0"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.342149 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.343535 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" 
event={"ID":"78166d97-60b0-4509-9725-19ab536a4ecd","Type":"ContainerStarted","Data":"0d7c90234581430070be98cc4a3ba4eafa703feecc2b22e4c10890d9e53a72e8"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.345273 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" event={"ID":"d8be7071-7d2a-492a-b511-be4ff4650873","Type":"ContainerStarted","Data":"bc37bfac3bd57b848797a9acf4c6d19096fedfc07045ae48df6334c8aa17a50b"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.345962 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.346687 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.347104 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" event={"ID":"8897f3cf-e9a2-40ce-9353-018d197f47b1","Type":"ContainerStarted","Data":"60c5c52d0d128012b3f02a967d22c76383fbf660eeda1d5204e70bf06f156164"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.348239 4890 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7znlr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.348275 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" podUID="d8be7071-7d2a-492a-b511-be4ff4650873" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" 
Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.348484 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" event={"ID":"e57c8cd2-2bce-4b03-abfd-65bdda351a79","Type":"ContainerStarted","Data":"e9ed0efee272955f335f3a3413ce24e84967266841187102fb79042cf387890f"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.348964 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.350081 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw" event={"ID":"fe52ad61-000f-4e87-b181-4484719b3593","Type":"ContainerStarted","Data":"8d1ce29a9f71a61f7c458eed44d622c813e6ec29d4a97e3e92b4330509ef63f1"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.350104 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw" event={"ID":"fe52ad61-000f-4e87-b181-4484719b3593","Type":"ContainerStarted","Data":"fe11c2397c6d4ef2107a8f2513c2f9ee9f97a551799e076e3bcbf5d99cbfe2b4"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.350493 4890 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-phqqn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.350525 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" podUID="e57c8cd2-2bce-4b03-abfd-65bdda351a79" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 21 
15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.351755 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8f6sd" podStartSLOduration=127.351730733 podStartE2EDuration="2m7.351730733s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.351561729 +0000 UTC m=+145.713004138" watchObservedRunningTime="2026-01-21 15:34:23.351730733 +0000 UTC m=+145.713173142" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.352829 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8l24p" event={"ID":"ac9d1b61-10ee-40c7-9c98-b76dc5170609","Type":"ContainerStarted","Data":"e3b178a9bbc9c2cbb5ffda3dc8e2ab42a5af6c951b1a287928765d64654338f8"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.353709 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.358310 4890 patch_prober.go:28] interesting pod/console-operator-58897d9998-8l24p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.358387 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8l24p" podUID="ac9d1b61-10ee-40c7-9c98-b76dc5170609" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.364128 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" event={"ID":"03a60911-f0d9-463b-b506-feb24e7c8c58","Type":"ContainerStarted","Data":"2b73c452e09589773b7c57ea9c7f6f45011e2964c017ba63bbde59c5327bdbea"} Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.366177 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7b8pk" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.383295 4890 patch_prober.go:28] interesting pod/downloads-7954f5f757-7b8pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.383388 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7b8pk" podUID="2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.452461 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vd9wg" podStartSLOduration=127.452424982 podStartE2EDuration="2m7.452424982s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.396036224 +0000 UTC m=+145.757478633" watchObservedRunningTime="2026-01-21 15:34:23.452424982 +0000 UTC m=+145.813867391" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.469956 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.472205 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:23.972171783 +0000 UTC m=+146.333614192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.527635 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6qw59" podStartSLOduration=126.527617229 podStartE2EDuration="2m6.527617229s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.526311379 +0000 UTC m=+145.887753788" watchObservedRunningTime="2026-01-21 15:34:23.527617229 +0000 UTC m=+145.889059638" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.528465 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" podStartSLOduration=126.528456208 podStartE2EDuration="2m6.528456208s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.465697335 +0000 UTC m=+145.827139744" watchObservedRunningTime="2026-01-21 15:34:23.528456208 +0000 UTC m=+145.889898617" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.535246 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.545339 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:23 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:23 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:23 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.545464 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.571975 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" podStartSLOduration=126.571959432 podStartE2EDuration="2m6.571959432s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.570796725 +0000 UTC m=+145.932239134" watchObservedRunningTime="2026-01-21 15:34:23.571959432 +0000 UTC m=+145.933401841" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.611849 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.612224 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.112211691 +0000 UTC m=+146.473654100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.716780 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fsz8b" podStartSLOduration=127.716749808 podStartE2EDuration="2m7.716749808s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.635594964 +0000 UTC m=+145.997037373" watchObservedRunningTime="2026-01-21 15:34:23.716749808 +0000 UTC m=+146.078192217" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.719146 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.719548 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.219522291 +0000 UTC m=+146.580964700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.719936 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" podStartSLOduration=126.71992855 podStartE2EDuration="2m6.71992855s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.715407447 +0000 UTC m=+146.076849856" watchObservedRunningTime="2026-01-21 15:34:23.71992855 +0000 UTC m=+146.081370959" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.754104 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmxm2" podStartSLOduration=127.75407685 podStartE2EDuration="2m7.75407685s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.751064231 +0000 UTC m=+146.112506640" watchObservedRunningTime="2026-01-21 15:34:23.75407685 +0000 UTC m=+146.115519259" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.824280 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.824675 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.324660492 +0000 UTC m=+146.686102901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.846575 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4xnql" podStartSLOduration=127.846558742 podStartE2EDuration="2m7.846558742s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.805014303 +0000 UTC m=+146.166456712" watchObservedRunningTime="2026-01-21 15:34:23.846558742 +0000 UTC m=+146.208001151" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.893471 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4mtd7" podStartSLOduration=127.893445292 podStartE2EDuration="2m7.893445292s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.847234097 +0000 UTC m=+146.208676506" watchObservedRunningTime="2026-01-21 15:34:23.893445292 +0000 UTC m=+146.254887701" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.926052 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.926115 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-wkqnl" podStartSLOduration=10.926098818 podStartE2EDuration="10.926098818s" podCreationTimestamp="2026-01-21 15:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.894033216 +0000 UTC m=+146.255475625" watchObservedRunningTime="2026-01-21 15:34:23.926098818 +0000 UTC m=+146.287541227" Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.926327 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jf26z" podStartSLOduration=9.926323483000001 podStartE2EDuration="9.926323483s" podCreationTimestamp="2026-01-21 15:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.924084822 +0000 UTC m=+146.285527231" watchObservedRunningTime="2026-01-21 15:34:23.926323483 +0000 UTC m=+146.287765892" Jan 21 15:34:23 crc kubenswrapper[4890]: E0121 15:34:23.926499 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.426476186 +0000 UTC m=+146.787918595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.929655 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 15:29:22 +0000 UTC, rotation deadline is 2026-10-09 05:42:58.513256456 +0000 UTC Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.929680 4890 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6254h8m34.583579346s for next certificate rotation Jan 21 15:34:23 crc kubenswrapper[4890]: I0121 15:34:23.956844 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gggm6" podStartSLOduration=127.956828319 podStartE2EDuration="2m7.956828319s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.954862375 +0000 UTC m=+146.316304784" watchObservedRunningTime="2026-01-21 15:34:23.956828319 +0000 UTC m=+146.318270728" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.001042 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lrdjw" podStartSLOduration=128.001026719 podStartE2EDuration="2m8.001026719s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:23.999737809 +0000 
UTC m=+146.361180218" watchObservedRunningTime="2026-01-21 15:34:24.001026719 +0000 UTC m=+146.362469128" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.029227 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.029578 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.52956646 +0000 UTC m=+146.891008869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.037515 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pqdtm" podStartSLOduration=128.037502522 podStartE2EDuration="2m8.037502522s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.036595821 +0000 UTC m=+146.398038230" watchObservedRunningTime="2026-01-21 15:34:24.037502522 +0000 UTC m=+146.398944931" Jan 21 15:34:24 crc 
kubenswrapper[4890]: I0121 15:34:24.071850 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vkkcm" podStartSLOduration=128.071832985 podStartE2EDuration="2m8.071832985s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.068856458 +0000 UTC m=+146.430298867" watchObservedRunningTime="2026-01-21 15:34:24.071832985 +0000 UTC m=+146.433275394" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.137176 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.137672 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.637628558 +0000 UTC m=+146.999070967 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.145866 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.147781 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.647765929 +0000 UTC m=+147.009208338 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.220076 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" podStartSLOduration=128.22005519 podStartE2EDuration="2m8.22005519s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.178311077 +0000 UTC m=+146.539753486" watchObservedRunningTime="2026-01-21 15:34:24.22005519 +0000 UTC m=+146.581497599" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.285153 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.286135 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.786103428 +0000 UTC m=+147.147545877 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.314925 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccn5" podStartSLOduration=128.314907816 podStartE2EDuration="2m8.314907816s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.222368723 +0000 UTC m=+146.583811132" watchObservedRunningTime="2026-01-21 15:34:24.314907816 +0000 UTC m=+146.676350225" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.373432 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" event={"ID":"12968d21-ebc2-42c6-9646-d377088401c4","Type":"ContainerStarted","Data":"51c0ab78416057cbf9ad77cfcd2a3f9db1d21608ed262ac715ce35921763824a"} Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.387612 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.387958 4890 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.887945934 +0000 UTC m=+147.249388343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.398765 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" event={"ID":"6216fb1e-ffaf-478e-a533-36d1ff128b63","Type":"ContainerStarted","Data":"c3e1dc95428203eea74c64b83fa0b8cfcd5da9611d0a33c0b45abaf63595bf6c"} Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.424471 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" podStartSLOduration=128.424453647 podStartE2EDuration="2m8.424453647s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.316155544 +0000 UTC m=+146.677597953" watchObservedRunningTime="2026-01-21 15:34:24.424453647 +0000 UTC m=+146.785896056" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.447083 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" event={"ID":"2c6666c6-bfb9-4874-82b3-fcafc29121c1","Type":"ContainerStarted","Data":"04cb4d6fec47bc0fc9ad3c2bed8faf43d9bdaefc62e01384fb0548dcd666dec5"} Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 
15:34:24.448396 4890 patch_prober.go:28] interesting pod/downloads-7954f5f757-7b8pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.448468 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7b8pk" podUID="2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.448737 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.448914 4890 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7znlr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.448970 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" podUID="d8be7071-7d2a-492a-b511-be4ff4650873" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.472788 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.482621 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-phqqn" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.489195 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.489323 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.989305898 +0000 UTC m=+147.350748297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.489721 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.490012 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-21 15:34:24.990004194 +0000 UTC m=+147.351446603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.534906 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xsw2b" podStartSLOduration=128.534891399 podStartE2EDuration="2m8.534891399s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.440645687 +0000 UTC m=+146.802088096" watchObservedRunningTime="2026-01-21 15:34:24.534891399 +0000 UTC m=+146.896333808" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.537276 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:24 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:24 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:24 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.537331 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.591294 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.593588 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:25.093561729 +0000 UTC m=+147.455004138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.596588 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjwmr" podStartSLOduration=127.596576298 podStartE2EDuration="2m7.596576298s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.594279495 +0000 UTC m=+146.955721904" watchObservedRunningTime="2026-01-21 15:34:24.596576298 +0000 UTC m=+146.958018707" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.597134 4890 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" podStartSLOduration=127.59712964 podStartE2EDuration="2m7.59712964s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.535571385 +0000 UTC m=+146.897013784" watchObservedRunningTime="2026-01-21 15:34:24.59712964 +0000 UTC m=+146.958572049" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.685806 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8l24p" podStartSLOduration=128.685786795 podStartE2EDuration="2m8.685786795s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.684580297 +0000 UTC m=+147.046022706" watchObservedRunningTime="2026-01-21 15:34:24.685786795 +0000 UTC m=+147.047229204" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.693835 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.694177 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:25.194165356 +0000 UTC m=+147.555607765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.761634 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7b8pk" podStartSLOduration=128.761617336 podStartE2EDuration="2m8.761617336s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.761251098 +0000 UTC m=+147.122693507" watchObservedRunningTime="2026-01-21 15:34:24.761617336 +0000 UTC m=+147.123059745" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.764452 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jv25z" podStartSLOduration=127.7644303 podStartE2EDuration="2m7.7644303s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.711016281 +0000 UTC m=+147.072458690" watchObservedRunningTime="2026-01-21 15:34:24.7644303 +0000 UTC m=+147.125872709" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.795177 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.795376 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:25.295333856 +0000 UTC m=+147.656776265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.795795 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.796111 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:25.296101924 +0000 UTC m=+147.657544333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.865533 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6jv4j" podStartSLOduration=11.865491478 podStartE2EDuration="11.865491478s" podCreationTimestamp="2026-01-21 15:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.865425537 +0000 UTC m=+147.226867956" watchObservedRunningTime="2026-01-21 15:34:24.865491478 +0000 UTC m=+147.226933887" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.896723 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.896943 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.897006 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.897023 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.897056 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.898325 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:24 crc kubenswrapper[4890]: E0121 15:34:24.898452 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 15:34:25.39841633 +0000 UTC m=+147.759858739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.902641 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.903238 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.911929 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.919086 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd-operator/etcd-operator-b45778765-hwfnn" podStartSLOduration=128.919072372 podStartE2EDuration="2m8.919072372s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.917096116 +0000 UTC m=+147.278538525" watchObservedRunningTime="2026-01-21 15:34:24.919072372 +0000 UTC m=+147.280514781" Jan 21 15:34:24 crc kubenswrapper[4890]: I0121 15:34:24.999203 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:24.999531 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:25.499517279 +0000 UTC m=+147.860959688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.002038 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-27xqq" podStartSLOduration=129.002024546 podStartE2EDuration="2m9.002024546s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:24.994994935 +0000 UTC m=+147.356437344" watchObservedRunningTime="2026-01-21 15:34:25.002024546 +0000 UTC m=+147.363466955" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.046853 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vdj8x"] Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.048332 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.051682 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vdfv8" podStartSLOduration=129.051577787 podStartE2EDuration="2m9.051577787s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.036946623 +0000 UTC m=+147.398389032" watchObservedRunningTime="2026-01-21 15:34:25.051577787 +0000 UTC m=+147.413020186" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.075024 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.103678 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:25.104321 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:25.604294661 +0000 UTC m=+147.965737070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.115775 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vdj8x"] Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.130116 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" podStartSLOduration=129.1301012 podStartE2EDuration="2m9.1301012s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.128038913 +0000 UTC m=+147.489481322" watchObservedRunningTime="2026-01-21 15:34:25.1301012 +0000 UTC m=+147.491543609" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.130368 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.138744 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.146209 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.206761 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" podStartSLOduration=129.20673438 podStartE2EDuration="2m9.20673438s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.206017324 +0000 UTC m=+147.567459733" watchObservedRunningTime="2026-01-21 15:34:25.20673438 +0000 UTC m=+147.568176789" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.210710 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.210805 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-catalog-content\") pod \"community-operators-vdj8x\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.210865 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8gbl\" (UniqueName: \"kubernetes.io/projected/37c92d1b-6b73-4c8f-b5f5-39062afd3003-kube-api-access-c8gbl\") pod \"community-operators-vdj8x\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 
crc kubenswrapper[4890]: I0121 15:34:25.210888 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-utilities\") pod \"community-operators-vdj8x\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:25.211226 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:25.711201622 +0000 UTC m=+148.072644221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.244517 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-56k6w"] Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.255308 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.262580 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.270758 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" podStartSLOduration=128.270741432 podStartE2EDuration="2m8.270741432s" podCreationTimestamp="2026-01-21 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.270424784 +0000 UTC m=+147.631867193" watchObservedRunningTime="2026-01-21 15:34:25.270741432 +0000 UTC m=+147.632183841" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.276099 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56k6w"] Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.314429 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.314770 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-catalog-content\") pod \"community-operators-vdj8x\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.314849 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8gbl\" 
(UniqueName: \"kubernetes.io/projected/37c92d1b-6b73-4c8f-b5f5-39062afd3003-kube-api-access-c8gbl\") pod \"community-operators-vdj8x\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.314873 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-utilities\") pod \"community-operators-vdj8x\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.315428 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-utilities\") pod \"community-operators-vdj8x\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:25.315542 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:25.815518794 +0000 UTC m=+148.176961203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.315799 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-catalog-content\") pod \"community-operators-vdj8x\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.337978 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8l24p" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.371316 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vq4s5" podStartSLOduration=129.371285308 podStartE2EDuration="2m9.371285308s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.369723422 +0000 UTC m=+147.731165831" watchObservedRunningTime="2026-01-21 15:34:25.371285308 +0000 UTC m=+147.732727717" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.408615 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8gbl\" (UniqueName: \"kubernetes.io/projected/37c92d1b-6b73-4c8f-b5f5-39062afd3003-kube-api-access-c8gbl\") pod \"community-operators-vdj8x\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " 
pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.413781 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lwk5g" podStartSLOduration=129.413760687 podStartE2EDuration="2m9.413760687s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.408246402 +0000 UTC m=+147.769688811" watchObservedRunningTime="2026-01-21 15:34:25.413760687 +0000 UTC m=+147.775203096" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.414668 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xj52j"] Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.416237 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-utilities\") pod \"certified-operators-56k6w\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.416318 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47z6x\" (UniqueName: \"kubernetes.io/projected/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-kube-api-access-47z6x\") pod \"certified-operators-56k6w\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:25.416767 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:34:25.916750666 +0000 UTC m=+148.278193075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.416976 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.417020 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-catalog-content\") pod \"certified-operators-56k6w\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.424800 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.432232 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xj52j"] Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.451471 4890 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8b6r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.451565 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" podUID="4fe9f9b1-5536-4457-ba0f-da6525f6672f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.459216 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxtfp" podStartSLOduration=129.459197225 podStartE2EDuration="2m9.459197225s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.456617836 +0000 UTC m=+147.818060235" watchObservedRunningTime="2026-01-21 15:34:25.459197225 +0000 UTC m=+147.820639634" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.505271 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" 
event={"ID":"2c6666c6-bfb9-4874-82b3-fcafc29121c1","Type":"ContainerStarted","Data":"faf6e46c8f15ad2e943a7986dee57c8d4deb453853cc2740fb1f43b2e85c2014"} Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.508043 4890 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7znlr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.508123 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" podUID="d8be7071-7d2a-492a-b511-be4ff4650873" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.519701 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.520168 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-catalog-content\") pod \"certified-operators-56k6w\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.520317 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-utilities\") pod \"certified-operators-56k6w\" (UID: 
\"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.520397 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47z6x\" (UniqueName: \"kubernetes.io/projected/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-kube-api-access-47z6x\") pod \"certified-operators-56k6w\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.521069 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-catalog-content\") pod \"certified-operators-56k6w\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.521240 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-utilities\") pod \"certified-operators-56k6w\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:25.521904 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.021871996 +0000 UTC m=+148.383314545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.535989 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:25 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:25 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:25 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.536038 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.610044 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47z6x\" (UniqueName: \"kubernetes.io/projected/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-kube-api-access-47z6x\") pod \"certified-operators-56k6w\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.621860 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2x9l\" (UniqueName: \"kubernetes.io/projected/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-kube-api-access-g2x9l\") pod 
\"community-operators-xj52j\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.622147 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-catalog-content\") pod \"community-operators-xj52j\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.622180 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.622236 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-utilities\") pod \"community-operators-xj52j\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:25.625926 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.125910822 +0000 UTC m=+148.487353301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.644510 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.659213 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzzf2" podStartSLOduration=129.659191621 podStartE2EDuration="2m9.659191621s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.587089705 +0000 UTC m=+147.948532114" watchObservedRunningTime="2026-01-21 15:34:25.659191621 +0000 UTC m=+148.020634030" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.660309 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zwlpl" podStartSLOduration=129.660303097 podStartE2EDuration="2m9.660303097s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.508800978 +0000 UTC m=+147.870243387" watchObservedRunningTime="2026-01-21 15:34:25.660303097 +0000 UTC m=+148.021745506" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.668964 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.670751 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lk7tb" podStartSLOduration=129.670729495 podStartE2EDuration="2m9.670729495s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:25.658798733 +0000 UTC m=+148.020241142" watchObservedRunningTime="2026-01-21 15:34:25.670729495 +0000 UTC m=+148.032171904" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.731189 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.731558 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-catalog-content\") pod \"community-operators-xj52j\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.731696 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-utilities\") pod \"community-operators-xj52j\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.731726 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g2x9l\" (UniqueName: \"kubernetes.io/projected/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-kube-api-access-g2x9l\") pod \"community-operators-xj52j\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:25.732231 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.232211279 +0000 UTC m=+148.593653688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.732712 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-catalog-content\") pod \"community-operators-xj52j\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.734116 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-utilities\") pod \"community-operators-xj52j\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.793300 4890 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kxtgf"] Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.803197 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.836115 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:25.836722 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.336711255 +0000 UTC m=+148.698153664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.896853 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2x9l\" (UniqueName: \"kubernetes.io/projected/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-kube-api-access-g2x9l\") pod \"community-operators-xj52j\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.947183 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.947341 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-catalog-content\") pod \"certified-operators-kxtgf\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.947411 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ccg5\" (UniqueName: \"kubernetes.io/projected/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-kube-api-access-6ccg5\") pod \"certified-operators-kxtgf\" (UID: 
\"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.947436 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-utilities\") pod \"certified-operators-kxtgf\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:25 crc kubenswrapper[4890]: E0121 15:34:25.947853 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.447834351 +0000 UTC m=+148.809276750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:25 crc kubenswrapper[4890]: I0121 15:34:25.961400 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxtgf"] Jan 21 15:34:26 crc kubenswrapper[4890]: W0121 15:34:26.044474 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-63cff31174e146e80c20d9f0e653fe9ff2beb50e4c058a9af56edb23fba09983 WatchSource:0}: Error finding container 63cff31174e146e80c20d9f0e653fe9ff2beb50e4c058a9af56edb23fba09983: Status 404 returned error can't find the container with id 
63cff31174e146e80c20d9f0e653fe9ff2beb50e4c058a9af56edb23fba09983 Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.046668 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.049481 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.049550 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-catalog-content\") pod \"certified-operators-kxtgf\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.049617 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ccg5\" (UniqueName: \"kubernetes.io/projected/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-kube-api-access-6ccg5\") pod \"certified-operators-kxtgf\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.049647 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-utilities\") pod \"certified-operators-kxtgf\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.050164 4890 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-utilities\") pod \"certified-operators-kxtgf\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.050245 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.550216559 +0000 UTC m=+148.911659138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.050429 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-catalog-content\") pod \"certified-operators-kxtgf\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.092374 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.093482 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.095192 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.096286 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" podStartSLOduration=130.096275451 podStartE2EDuration="2m10.096275451s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:26.093032617 +0000 UTC m=+148.454475026" watchObservedRunningTime="2026-01-21 15:34:26.096275451 +0000 UTC m=+148.457717860" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.097290 4890 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9pt8d container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.097335 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" podUID="2c6666c6-bfb9-4874-82b3-fcafc29121c1" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.148232 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.148527 4890 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zrs8z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:34:26 
crc kubenswrapper[4890]: I0121 15:34:26.148565 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" podUID="9440b7c8-228d-452a-ba7e-ea7f3f8c0254" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.150135 4890 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zrs8z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.150259 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" podUID="9440b7c8-228d-452a-ba7e-ea7f3f8c0254" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.165310 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.166516 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ccg5\" (UniqueName: \"kubernetes.io/projected/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-kube-api-access-6ccg5\") pod 
\"certified-operators-kxtgf\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.167032 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.667014486 +0000 UTC m=+149.028456895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.217565 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.273659 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.273997 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.773983539 +0000 UTC m=+149.135425948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.314098 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.315087 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.322561 4890 patch_prober.go:28] interesting pod/downloads-7954f5f757-7b8pk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.322640 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7b8pk" podUID="2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.322952 4890 patch_prober.go:28] interesting pod/downloads-7954f5f757-7b8pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.322976 4890 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-7b8pk" podUID="2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.376040 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.377729 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.877714987 +0000 UTC m=+149.239157396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.389696 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.389732 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.418956 4890 patch_prober.go:28] interesting pod/console-f9d7485db-vq4s5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.419043 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vq4s5" podUID="b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 21 15:34:26 crc kubenswrapper[4890]: W0121 15:34:26.422848 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-563dc904b192c499dd7c5e03dff00ca6e1d7c1f911237224c34634570255ccc2 WatchSource:0}: Error finding container 563dc904b192c499dd7c5e03dff00ca6e1d7c1f911237224c34634570255ccc2: Status 404 returned error can't find the container with id 
563dc904b192c499dd7c5e03dff00ca6e1d7c1f911237224c34634570255ccc2 Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.477341 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.478821 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:26.978809305 +0000 UTC m=+149.340251714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.532586 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.546887 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:26 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:26 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:26 crc 
kubenswrapper[4890]: healthz check failed Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.547501 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.559228 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" event={"ID":"6216fb1e-ffaf-478e-a533-36d1ff128b63","Type":"ContainerStarted","Data":"72cefe389417f190645b4068aa1863558afa5a4183218f5c46aabef771c8b666"} Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.569049 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"563dc904b192c499dd7c5e03dff00ca6e1d7c1f911237224c34634570255ccc2"} Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.578182 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.579543 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.079510535 +0000 UTC m=+149.440952934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.597696 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"63cff31174e146e80c20d9f0e653fe9ff2beb50e4c058a9af56edb23fba09983"} Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.599845 4890 generic.go:334] "Generic (PLEG): container finished" podID="d5373aa1-b2ba-47c7-bbdb-1835b9758c77" containerID="6c69e4fa8334a52053bc0b1cb73fcd66f218305c00dbea9f5738a65d4347ffb6" exitCode=0 Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.600523 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" event={"ID":"d5373aa1-b2ba-47c7-bbdb-1835b9758c77","Type":"ContainerDied","Data":"6c69e4fa8334a52053bc0b1cb73fcd66f218305c00dbea9f5738a65d4347ffb6"} Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.605849 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8b6r" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.683569 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.685155 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.185139057 +0000 UTC m=+149.546581466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.756086 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.762865 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56k6w"] Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.785567 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.785833 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 15:34:27.285818156 +0000 UTC m=+149.647260575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.887452 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.887721 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.387709722 +0000 UTC m=+149.749152121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.916685 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vdj8x"] Jan 21 15:34:26 crc kubenswrapper[4890]: W0121 15:34:26.973498 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37c92d1b_6b73_4c8f_b5f5_39062afd3003.slice/crio-495a63a37d23de1dea7b6125c4a64d4959d96333e231afbf80b231755d206ab6 WatchSource:0}: Error finding container 495a63a37d23de1dea7b6125c4a64d4959d96333e231afbf80b231755d206ab6: Status 404 returned error can't find the container with id 495a63a37d23de1dea7b6125c4a64d4959d96333e231afbf80b231755d206ab6 Jan 21 15:34:26 crc kubenswrapper[4890]: I0121 15:34:26.987974 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:26 crc kubenswrapper[4890]: E0121 15:34:26.988470 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.488454002 +0000 UTC m=+149.849896411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.018936 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xj52j"] Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.092735 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.093188 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.593172694 +0000 UTC m=+149.954615103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.193676 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.193793 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.693776011 +0000 UTC m=+150.055218420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.193995 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.194273 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.694267192 +0000 UTC m=+150.055709601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.230983 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5vt8l"] Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.233688 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.237050 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.239155 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vt8l"] Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.247224 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxtgf"] Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.309236 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.309420 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.809396041 +0000 UTC m=+150.170838450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.309517 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-utilities\") pod \"redhat-marketplace-5vt8l\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.309585 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv428\" (UniqueName: \"kubernetes.io/projected/b30c6789-488c-4191-bbb2-24ff82f8c648-kube-api-access-rv428\") pod \"redhat-marketplace-5vt8l\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.309625 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.309666 4890 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-catalog-content\") pod \"redhat-marketplace-5vt8l\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.309942 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.809930283 +0000 UTC m=+150.171372692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.385143 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.385762 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.389000 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.389096 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.403992 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.410175 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.410324 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-utilities\") pod \"redhat-marketplace-5vt8l\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.410408 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv428\" (UniqueName: \"kubernetes.io/projected/b30c6789-488c-4191-bbb2-24ff82f8c648-kube-api-access-rv428\") pod \"redhat-marketplace-5vt8l\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.410468 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d13b3a82-fef5-4bb5-b671-705739ec8ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.410491 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d13b3a82-fef5-4bb5-b671-705739ec8ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.410512 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-catalog-content\") pod \"redhat-marketplace-5vt8l\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.410936 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-catalog-content\") pod \"redhat-marketplace-5vt8l\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.411014 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:27.910996671 +0000 UTC m=+150.272439080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.411233 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-utilities\") pod \"redhat-marketplace-5vt8l\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.467117 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv428\" (UniqueName: \"kubernetes.io/projected/b30c6789-488c-4191-bbb2-24ff82f8c648-kube-api-access-rv428\") pod \"redhat-marketplace-5vt8l\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.512051 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.512101 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"d13b3a82-fef5-4bb5-b671-705739ec8ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.512124 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d13b3a82-fef5-4bb5-b671-705739ec8ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.512228 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d13b3a82-fef5-4bb5-b671-705739ec8ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.512528 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.012513509 +0000 UTC m=+150.373955918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.537017 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d13b3a82-fef5-4bb5-b671-705739ec8ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.541414 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:27 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:27 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:27 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.541473 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.569377 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.613382 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.613593 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.113553756 +0000 UTC m=+150.474996165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.613994 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.614319 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.114306633 +0000 UTC m=+150.475749032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.617666 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4e8f34b0484937e73e2cfe46e98c770a4b88b256dd264ae7479e3015ce77ddcc"} Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.621421 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5vd5j"] Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.622370 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.624720 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e3b2312682ce829546a427516bebd5ccc67c4da3ec1f9b5b85e797a737b52aac"} Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.633943 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xj52j" event={"ID":"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219","Type":"ContainerStarted","Data":"e74f520bea38eceb479c982b846dddaba4f9484640a710ea33bd81931a82fb0e"} Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.636967 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56k6w" event={"ID":"4fe216b5-31a0-4a3e-aa65-c35c43fb6073","Type":"ContainerStarted","Data":"cc4c34d7a23e217b713b49be081df24b5f3ee6cd7f7e0b7d8810e0a02ed9a527"} Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.639938 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"887e6e154a2c2f37165bd747ff52c3aa2b520a75ba4922f054e05c1d4f68ed2f"} Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.640008 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6759c9550a592874e5fb93dd1308ed6169df9a0762c9f7fc2fb5bf60c05feb5b"} Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.640788 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.643426 4890 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vd5j"] Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.644480 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxtgf" event={"ID":"35199ad9-530d-4bc4-b3bb-6b89cc5c477e","Type":"ContainerStarted","Data":"960054e0bdc95b7901b29d4e028a63e167f381131229df8ee499458f2d498e64"} Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.648236 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdj8x" event={"ID":"37c92d1b-6b73-4c8f-b5f5-39062afd3003","Type":"ContainerStarted","Data":"495a63a37d23de1dea7b6125c4a64d4959d96333e231afbf80b231755d206ab6"} Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.679186 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-svrdg" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.705797 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.714617 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.714862 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-catalog-content\") pod \"redhat-marketplace-5vd5j\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.715024 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-utilities\") pod \"redhat-marketplace-5vd5j\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.715041 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvhsz\" (UniqueName: \"kubernetes.io/projected/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-kube-api-access-dvhsz\") pod \"redhat-marketplace-5vd5j\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.717136 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.21709272 +0000 UTC m=+150.578535269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.816692 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-catalog-content\") pod \"redhat-marketplace-5vd5j\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.816780 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.816872 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-utilities\") pod \"redhat-marketplace-5vd5j\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.816906 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvhsz\" (UniqueName: 
\"kubernetes.io/projected/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-kube-api-access-dvhsz\") pod \"redhat-marketplace-5vd5j\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.818054 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-catalog-content\") pod \"redhat-marketplace-5vd5j\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.818518 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.318497795 +0000 UTC m=+150.679940204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.818821 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-utilities\") pod \"redhat-marketplace-5vd5j\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.918007 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.918220 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.418174221 +0000 UTC m=+150.779616630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.918345 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:27 crc kubenswrapper[4890]: E0121 15:34:27.921819 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.421797694 +0000 UTC m=+150.783240103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.937173 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvhsz\" (UniqueName: \"kubernetes.io/projected/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-kube-api-access-dvhsz\") pod \"redhat-marketplace-5vd5j\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:27 crc kubenswrapper[4890]: I0121 15:34:27.939971 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.025321 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:28 crc kubenswrapper[4890]: E0121 15:34:28.026696 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.526674499 +0000 UTC m=+150.888116908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.134285 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:28 crc kubenswrapper[4890]: E0121 15:34:28.141874 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.641843729 +0000 UTC m=+151.003286138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.159889 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zrs8z" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.229310 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9pcl"] Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.237540 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:28 crc kubenswrapper[4890]: E0121 15:34:28.238589 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.738538676 +0000 UTC m=+151.099981085 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.241095 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.249091 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.253311 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9pcl"] Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.340852 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.340969 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-catalog-content\") pod \"redhat-operators-j9pcl\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.341047 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-utilities\") pod \"redhat-operators-j9pcl\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.341077 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgmkh\" (UniqueName: \"kubernetes.io/projected/5fbee014-1292-47e2-b628-a2bf014b6f09-kube-api-access-vgmkh\") pod \"redhat-operators-j9pcl\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: E0121 15:34:28.341601 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.841580059 +0000 UTC m=+151.203022468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.414955 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4g6l9"] Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.416451 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.428372 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4g6l9"] Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.445572 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.445859 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-utilities\") pod \"redhat-operators-j9pcl\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.445912 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-utilities\") pod \"redhat-operators-4g6l9\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.445938 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgmkh\" (UniqueName: \"kubernetes.io/projected/5fbee014-1292-47e2-b628-a2bf014b6f09-kube-api-access-vgmkh\") pod \"redhat-operators-j9pcl\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.445970 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-catalog-content\") pod \"redhat-operators-4g6l9\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.446023 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtq4g\" (UniqueName: \"kubernetes.io/projected/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-kube-api-access-rtq4g\") pod \"redhat-operators-4g6l9\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.446061 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-catalog-content\") pod \"redhat-operators-j9pcl\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.446763 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-catalog-content\") pod \"redhat-operators-j9pcl\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: E0121 15:34:28.446865 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:28.946839313 +0000 UTC m=+151.308281722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.447116 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-utilities\") pod \"redhat-operators-j9pcl\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.475382 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgmkh\" (UniqueName: \"kubernetes.io/projected/5fbee014-1292-47e2-b628-a2bf014b6f09-kube-api-access-vgmkh\") pod \"redhat-operators-j9pcl\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.539800 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vt8l"] Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.542576 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:28 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:28 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:28 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.542637 4890 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.551943 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-catalog-content\") pod \"redhat-operators-4g6l9\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.553092 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.561381 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.561457 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtq4g\" (UniqueName: \"kubernetes.io/projected/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-kube-api-access-rtq4g\") pod \"redhat-operators-4g6l9\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.555114 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-catalog-content\") pod \"redhat-operators-4g6l9\" (UID: 
\"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.561731 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-utilities\") pod \"redhat-operators-4g6l9\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: E0121 15:34:28.562099 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:29.062070454 +0000 UTC m=+151.423513043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.565234 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-utilities\") pod \"redhat-operators-4g6l9\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.598476 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.608012 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtq4g\" (UniqueName: \"kubernetes.io/projected/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-kube-api-access-rtq4g\") pod \"redhat-operators-4g6l9\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.635213 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.650478 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.665288 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:28 crc kubenswrapper[4890]: E0121 15:34:28.675694 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:29.168018273 +0000 UTC m=+151.529460832 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.691705 4890 generic.go:334] "Generic (PLEG): container finished" podID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerID="4afbc2a9fc3a1a5321b0e25e6790e2cbd2905b3fa9ce9af6b8c5a2ba78137728" exitCode=0 Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.692630 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdj8x" event={"ID":"37c92d1b-6b73-4c8f-b5f5-39062afd3003","Type":"ContainerDied","Data":"4afbc2a9fc3a1a5321b0e25e6790e2cbd2905b3fa9ce9af6b8c5a2ba78137728"} Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.697849 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.697851 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vt8l" event={"ID":"b30c6789-488c-4191-bbb2-24ff82f8c648","Type":"ContainerStarted","Data":"907bfe1ceb5187115dfffd02a7464bff6e890ea360115e79d5496bedd1317069"} Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.702819 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" event={"ID":"d5373aa1-b2ba-47c7-bbdb-1835b9758c77","Type":"ContainerDied","Data":"fae947bcac611759f280ea3159bce3708660d8cf718c390041f3234b9382e2cc"} Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.702870 4890 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fae947bcac611759f280ea3159bce3708660d8cf718c390041f3234b9382e2cc" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.702997 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.763042 4890 generic.go:334] "Generic (PLEG): container finished" podID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerID="a8f192b70a8584941bd3e66ad75a1b5f0eccc832b20f8abdd724c18ed4c36c46" exitCode=0 Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.763562 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xj52j" event={"ID":"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219","Type":"ContainerDied","Data":"a8f192b70a8584941bd3e66ad75a1b5f0eccc832b20f8abdd724c18ed4c36c46"} Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.767106 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28fv6\" (UniqueName: \"kubernetes.io/projected/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-kube-api-access-28fv6\") pod \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.767206 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-secret-volume\") pod \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.767238 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-config-volume\") pod \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\" (UID: \"d5373aa1-b2ba-47c7-bbdb-1835b9758c77\") " Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 
15:34:28.767545 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.770371 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-config-volume" (OuterVolumeSpecName: "config-volume") pod "d5373aa1-b2ba-47c7-bbdb-1835b9758c77" (UID: "d5373aa1-b2ba-47c7-bbdb-1835b9758c77"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:34:28 crc kubenswrapper[4890]: E0121 15:34:28.770727 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:29.270710538 +0000 UTC m=+151.632152947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.787566 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d5373aa1-b2ba-47c7-bbdb-1835b9758c77" (UID: "d5373aa1-b2ba-47c7-bbdb-1835b9758c77"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.788250 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-kube-api-access-28fv6" (OuterVolumeSpecName: "kube-api-access-28fv6") pod "d5373aa1-b2ba-47c7-bbdb-1835b9758c77" (UID: "d5373aa1-b2ba-47c7-bbdb-1835b9758c77"). InnerVolumeSpecName "kube-api-access-28fv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.804308 4890 generic.go:334] "Generic (PLEG): container finished" podID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerID="4e266877516fb3ef845010209b5ee751bcb219cac0d05344c9ac4cfe4877b519" exitCode=0 Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.804450 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56k6w" event={"ID":"4fe216b5-31a0-4a3e-aa65-c35c43fb6073","Type":"ContainerDied","Data":"4e266877516fb3ef845010209b5ee751bcb219cac0d05344c9ac4cfe4877b519"} Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.812901 4890 generic.go:334] "Generic (PLEG): container finished" podID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerID="ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3" exitCode=0 Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.812984 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxtgf" event={"ID":"35199ad9-530d-4bc4-b3bb-6b89cc5c477e","Type":"ContainerDied","Data":"ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3"} Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.833998 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vd5j"] Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.847171 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" event={"ID":"6216fb1e-ffaf-478e-a533-36d1ff128b63","Type":"ContainerStarted","Data":"7f7195d44ab2191a5ee3fcb561a3accb80ad8d65cd82cdaed11b2748058fc573"} Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.908091 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:28 crc kubenswrapper[4890]: E0121 15:34:28.911558 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:29.411507983 +0000 UTC m=+151.772950392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.913263 4890 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.913710 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:34:28 crc kubenswrapper[4890]: I0121 15:34:28.913756 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28fv6\" (UniqueName: \"kubernetes.io/projected/d5373aa1-b2ba-47c7-bbdb-1835b9758c77-kube-api-access-28fv6\") on node \"crc\" DevicePath \"\"" Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.014694 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.016686 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:29.516669044 +0000 UTC m=+151.878111453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.119026 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.120205 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:29.620174248 +0000 UTC m=+151.981616657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.224406 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.224950 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:29.7249366 +0000 UTC m=+152.086379009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.227495 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9pcl"] Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.285751 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4g6l9"] Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.318948 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.325533 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.326296 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:29.826275824 +0000 UTC m=+152.187718233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.427467 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.427902 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:29.927880693 +0000 UTC m=+152.289323142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.436554 4890 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.528854 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.529053 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:30.029024482 +0000 UTC m=+152.390466891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.529110 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.529467 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:30.029448052 +0000 UTC m=+152.390890461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.539755 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:29 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:29 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:29 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.539807 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.630806 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.630967 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 15:34:30.130942559 +0000 UTC m=+152.492384968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.631044 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.631365 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:30.131343998 +0000 UTC m=+152.492786407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.731505 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.731883 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:30.231831223 +0000 UTC m=+152.593273642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.833210 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.833665 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:30.333648738 +0000 UTC m=+152.695091137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.857165 4890 generic.go:334] "Generic (PLEG): container finished" podID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerID="2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a" exitCode=0 Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.857252 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vd5j" event={"ID":"2029ad20-8b9d-47d2-b9a2-37bfe9887f44","Type":"ContainerDied","Data":"2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.857310 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vd5j" event={"ID":"2029ad20-8b9d-47d2-b9a2-37bfe9887f44","Type":"ContainerStarted","Data":"c01c464a6b3eb9a8cc8d49b503bbc83566f1b23fe1b1286b5c05f5ea4e3ba36f"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.860841 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" event={"ID":"6216fb1e-ffaf-478e-a533-36d1ff128b63","Type":"ContainerStarted","Data":"9e3a14c50f81c42dc78a3b21f86ac7124a67407741a0b1105d9728e13a09ee62"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.864655 4890 generic.go:334] "Generic (PLEG): container finished" podID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerID="f155c63cb979620569a6c5f88d03bd7b875c6594a80705ba38ab690ea4684796" exitCode=0 Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.864783 
4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pcl" event={"ID":"5fbee014-1292-47e2-b628-a2bf014b6f09","Type":"ContainerDied","Data":"f155c63cb979620569a6c5f88d03bd7b875c6594a80705ba38ab690ea4684796"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.864807 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pcl" event={"ID":"5fbee014-1292-47e2-b628-a2bf014b6f09","Type":"ContainerStarted","Data":"3f33f2108c766b31fdd17acf39160346f4b3f6bced0ab1ef08228c767a5068d3"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.866676 4890 generic.go:334] "Generic (PLEG): container finished" podID="d13b3a82-fef5-4bb5-b671-705739ec8ed8" containerID="5f858d90c0a60e1611eee4c1d53aedac68cdc669eb9807eda6892760a363b95c" exitCode=0 Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.866749 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d13b3a82-fef5-4bb5-b671-705739ec8ed8","Type":"ContainerDied","Data":"5f858d90c0a60e1611eee4c1d53aedac68cdc669eb9807eda6892760a363b95c"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.866769 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d13b3a82-fef5-4bb5-b671-705739ec8ed8","Type":"ContainerStarted","Data":"8a5bde758a3026cb173b25f6da8c2a98ac8248b9673821e08ea47eb09854933b"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.869227 4890 generic.go:334] "Generic (PLEG): container finished" podID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerID="8cb24b9a05ae4193932cf36b8ba3b12d2a8f845215e73969dc9f61b65a41d525" exitCode=0 Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.869295 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vt8l" 
event={"ID":"b30c6789-488c-4191-bbb2-24ff82f8c648","Type":"ContainerDied","Data":"8cb24b9a05ae4193932cf36b8ba3b12d2a8f845215e73969dc9f61b65a41d525"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.871408 4890 generic.go:334] "Generic (PLEG): container finished" podID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerID="14b485b5bf7b581700de9d0d8dcddcddb135531ed4ce57f0c66eb1d64ce279b4" exitCode=0 Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.871455 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g6l9" event={"ID":"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6","Type":"ContainerDied","Data":"14b485b5bf7b581700de9d0d8dcddcddb135531ed4ce57f0c66eb1d64ce279b4"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.871474 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g6l9" event={"ID":"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6","Type":"ContainerStarted","Data":"98bac27d43abd2bb7386bac42e1aa417819156b77d6cb47f8e94d4ca62895cc5"} Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.936335 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.936477 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:30.436452675 +0000 UTC m=+152.797895084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.936614 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:29 crc kubenswrapper[4890]: E0121 15:34:29.936955 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:30.436948297 +0000 UTC m=+152.798390706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:29 crc kubenswrapper[4890]: I0121 15:34:29.962523 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-kcs8m" podStartSLOduration=16.96249886 podStartE2EDuration="16.96249886s" podCreationTimestamp="2026-01-21 15:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:29.954573399 +0000 UTC m=+152.316015818" watchObservedRunningTime="2026-01-21 15:34:29.96249886 +0000 UTC m=+152.323941269" Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.038917 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:30 crc kubenswrapper[4890]: E0121 15:34:30.039269 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:34:30.539241412 +0000 UTC m=+152.900683821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.039527 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:30 crc kubenswrapper[4890]: E0121 15:34:30.039927 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:34:30.539915668 +0000 UTC m=+152.901358077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p86sf" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.111903 4890 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T15:34:29.436811367Z","Handler":null,"Name":""} Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.135701 4890 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.135803 4890 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.141573 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.145761 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.244858 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.535880 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:30 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:30 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:30 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.536001 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.569586 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.569650 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.698060 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p86sf\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") " pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:30 crc kubenswrapper[4890]: I0121 15:34:30.801031 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.087818 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.093377 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9pt8d" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.227023 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.260814 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kubelet-dir\") pod \"d13b3a82-fef5-4bb5-b671-705739ec8ed8\" (UID: \"d13b3a82-fef5-4bb5-b671-705739ec8ed8\") " Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.261001 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kube-api-access\") pod \"d13b3a82-fef5-4bb5-b671-705739ec8ed8\" (UID: \"d13b3a82-fef5-4bb5-b671-705739ec8ed8\") " Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.261521 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d13b3a82-fef5-4bb5-b671-705739ec8ed8" (UID: "d13b3a82-fef5-4bb5-b671-705739ec8ed8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.286323 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p86sf"] Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.290638 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d13b3a82-fef5-4bb5-b671-705739ec8ed8" (UID: "d13b3a82-fef5-4bb5-b671-705739ec8ed8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.366125 4890 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.366168 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13b3a82-fef5-4bb5-b671-705739ec8ed8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.536550 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:31 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:31 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:31 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.536668 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.751196 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:34:31 crc kubenswrapper[4890]: E0121 15:34:31.751711 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5373aa1-b2ba-47c7-bbdb-1835b9758c77" containerName="collect-profiles" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.751746 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5373aa1-b2ba-47c7-bbdb-1835b9758c77" containerName="collect-profiles" Jan 21 
15:34:31 crc kubenswrapper[4890]: E0121 15:34:31.751757 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13b3a82-fef5-4bb5-b671-705739ec8ed8" containerName="pruner" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.751765 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13b3a82-fef5-4bb5-b671-705739ec8ed8" containerName="pruner" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.758710 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d13b3a82-fef5-4bb5-b671-705739ec8ed8" containerName="pruner" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.758743 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5373aa1-b2ba-47c7-bbdb-1835b9758c77" containerName="collect-profiles" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.759147 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.761087 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.764923 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.765199 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.782328 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d31d456-1029-446c-80c4-87bdbee9fbaf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d31d456-1029-446c-80c4-87bdbee9fbaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.782476 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d31d456-1029-446c-80c4-87bdbee9fbaf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d31d456-1029-446c-80c4-87bdbee9fbaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.885066 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d31d456-1029-446c-80c4-87bdbee9fbaf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d31d456-1029-446c-80c4-87bdbee9fbaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.885176 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d31d456-1029-446c-80c4-87bdbee9fbaf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d31d456-1029-446c-80c4-87bdbee9fbaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.885272 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d31d456-1029-446c-80c4-87bdbee9fbaf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d31d456-1029-446c-80c4-87bdbee9fbaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.897881 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d13b3a82-fef5-4bb5-b671-705739ec8ed8","Type":"ContainerDied","Data":"8a5bde758a3026cb173b25f6da8c2a98ac8248b9673821e08ea47eb09854933b"} Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.897921 4890 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8a5bde758a3026cb173b25f6da8c2a98ac8248b9673821e08ea47eb09854933b" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.897963 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.902705 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" event={"ID":"566f28b1-744d-4cd6-b60a-f139a071579d","Type":"ContainerStarted","Data":"c2c95160452d813b0d9dca79d98ea59b0103f10945985e8593e9f45d06fb8e78"} Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.903230 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" event={"ID":"566f28b1-744d-4cd6-b60a-f139a071579d","Type":"ContainerStarted","Data":"ee2466e3f424aeaa029450274efcba061da7fde67c8d7e19e26d6bd45e64791c"} Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.903263 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d31d456-1029-446c-80c4-87bdbee9fbaf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d31d456-1029-446c-80c4-87bdbee9fbaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.937248 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" podStartSLOduration=135.93722316 podStartE2EDuration="2m15.93722316s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:34:31.925614625 +0000 UTC m=+154.287057044" watchObservedRunningTime="2026-01-21 15:34:31.93722316 +0000 UTC m=+154.298665569" Jan 21 15:34:31 crc kubenswrapper[4890]: I0121 15:34:31.959609 4890 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 15:34:32 crc kubenswrapper[4890]: I0121 15:34:32.088645 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:32 crc kubenswrapper[4890]: I0121 15:34:32.410935 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:34:32 crc kubenswrapper[4890]: W0121 15:34:32.418252 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4d31d456_1029_446c_80c4_87bdbee9fbaf.slice/crio-38d5629309a0354aef2126671c4bcd5bef1db8e0023c5770214dd1c75423c393 WatchSource:0}: Error finding container 38d5629309a0354aef2126671c4bcd5bef1db8e0023c5770214dd1c75423c393: Status 404 returned error can't find the container with id 38d5629309a0354aef2126671c4bcd5bef1db8e0023c5770214dd1c75423c393 Jan 21 15:34:32 crc kubenswrapper[4890]: I0121 15:34:32.539220 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:32 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:32 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:32 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:32 crc kubenswrapper[4890]: I0121 15:34:32.539289 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:32 crc kubenswrapper[4890]: I0121 15:34:32.933960 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d31d456-1029-446c-80c4-87bdbee9fbaf","Type":"ContainerStarted","Data":"38d5629309a0354aef2126671c4bcd5bef1db8e0023c5770214dd1c75423c393"} Jan 21 15:34:32 crc kubenswrapper[4890]: I0121 15:34:32.934624 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:33 crc kubenswrapper[4890]: I0121 15:34:33.536548 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:33 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:33 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:33 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:33 crc kubenswrapper[4890]: I0121 15:34:33.537181 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:33 crc kubenswrapper[4890]: I0121 15:34:33.947326 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d31d456-1029-446c-80c4-87bdbee9fbaf","Type":"ContainerStarted","Data":"46c7c08dfd08ee915ac0bcfc3c2be10f46928fdca5efd75038739303856f2dcd"} Jan 21 15:34:33 crc kubenswrapper[4890]: I0121 15:34:33.975479 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.975454289 podStartE2EDuration="2.975454289s" podCreationTimestamp="2026-01-21 15:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
15:34:33.972756818 +0000 UTC m=+156.334199227" watchObservedRunningTime="2026-01-21 15:34:33.975454289 +0000 UTC m=+156.336896698" Jan 21 15:34:34 crc kubenswrapper[4890]: I0121 15:34:34.463151 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jf26z" Jan 21 15:34:34 crc kubenswrapper[4890]: I0121 15:34:34.537833 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:34 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:34 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:34 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:34 crc kubenswrapper[4890]: I0121 15:34:34.537912 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:35 crc kubenswrapper[4890]: I0121 15:34:35.535899 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:35 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:35 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:35 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:35 crc kubenswrapper[4890]: I0121 15:34:35.536013 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:35 crc 
kubenswrapper[4890]: I0121 15:34:35.976795 4890 generic.go:334] "Generic (PLEG): container finished" podID="4d31d456-1029-446c-80c4-87bdbee9fbaf" containerID="46c7c08dfd08ee915ac0bcfc3c2be10f46928fdca5efd75038739303856f2dcd" exitCode=0 Jan 21 15:34:35 crc kubenswrapper[4890]: I0121 15:34:35.976845 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d31d456-1029-446c-80c4-87bdbee9fbaf","Type":"ContainerDied","Data":"46c7c08dfd08ee915ac0bcfc3c2be10f46928fdca5efd75038739303856f2dcd"} Jan 21 15:34:36 crc kubenswrapper[4890]: I0121 15:34:36.321685 4890 patch_prober.go:28] interesting pod/downloads-7954f5f757-7b8pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 15:34:36 crc kubenswrapper[4890]: I0121 15:34:36.321773 4890 patch_prober.go:28] interesting pod/downloads-7954f5f757-7b8pk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 15:34:36 crc kubenswrapper[4890]: I0121 15:34:36.321810 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7b8pk" podUID="2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 15:34:36 crc kubenswrapper[4890]: I0121 15:34:36.321857 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7b8pk" podUID="2e1c22cf-8bb6-4fa3-acb9-5b8cbfb85c5f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 15:34:36 crc kubenswrapper[4890]: I0121 
15:34:36.389186 4890 patch_prober.go:28] interesting pod/console-f9d7485db-vq4s5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 21 15:34:36 crc kubenswrapper[4890]: I0121 15:34:36.389286 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vq4s5" podUID="b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 21 15:34:36 crc kubenswrapper[4890]: I0121 15:34:36.537863 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:36 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:36 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:36 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:36 crc kubenswrapper[4890]: I0121 15:34:36.537972 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:37 crc kubenswrapper[4890]: I0121 15:34:37.534333 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:37 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:37 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:37 crc kubenswrapper[4890]: healthz check failed Jan 21 
15:34:37 crc kubenswrapper[4890]: I0121 15:34:37.534632 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:38 crc kubenswrapper[4890]: I0121 15:34:38.534000 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:38 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:38 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:38 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:38 crc kubenswrapper[4890]: I0121 15:34:38.534062 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:38 crc kubenswrapper[4890]: I0121 15:34:38.705942 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:34:38 crc kubenswrapper[4890]: I0121 15:34:38.712956 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a86abbe4-e7c5-4a3e-a8d7-02d82267ded6-metrics-certs\") pod \"network-metrics-daemon-j9mfr\" (UID: \"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6\") " pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:34:38 crc kubenswrapper[4890]: I0121 15:34:38.735782 4890 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j9mfr" Jan 21 15:34:39 crc kubenswrapper[4890]: I0121 15:34:39.535433 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:39 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:39 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:39 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:39 crc kubenswrapper[4890]: I0121 15:34:39.536001 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:40 crc kubenswrapper[4890]: I0121 15:34:40.534043 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:40 crc kubenswrapper[4890]: [-]has-synced failed: reason withheld Jan 21 15:34:40 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:40 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:40 crc kubenswrapper[4890]: I0121 15:34:40.534164 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:41 crc kubenswrapper[4890]: I0121 15:34:41.544519 4890 patch_prober.go:28] interesting pod/router-default-5444994796-27xqq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:34:41 crc kubenswrapper[4890]: [+]has-synced ok Jan 21 15:34:41 crc kubenswrapper[4890]: [+]process-running ok Jan 21 15:34:41 crc kubenswrapper[4890]: healthz check failed Jan 21 15:34:41 crc kubenswrapper[4890]: I0121 15:34:41.544878 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-27xqq" podUID="4225fa07-37fd-4813-b101-8a2a4016c008" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:34:42 crc kubenswrapper[4890]: I0121 15:34:42.537628 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:42 crc kubenswrapper[4890]: I0121 15:34:42.541710 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-27xqq" Jan 21 15:34:43 crc kubenswrapper[4890]: I0121 15:34:43.395657 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:43 crc kubenswrapper[4890]: I0121 15:34:43.578160 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d31d456-1029-446c-80c4-87bdbee9fbaf-kube-api-access\") pod \"4d31d456-1029-446c-80c4-87bdbee9fbaf\" (UID: \"4d31d456-1029-446c-80c4-87bdbee9fbaf\") " Jan 21 15:34:43 crc kubenswrapper[4890]: I0121 15:34:43.578298 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d31d456-1029-446c-80c4-87bdbee9fbaf-kubelet-dir\") pod \"4d31d456-1029-446c-80c4-87bdbee9fbaf\" (UID: \"4d31d456-1029-446c-80c4-87bdbee9fbaf\") " Jan 21 15:34:43 crc kubenswrapper[4890]: I0121 15:34:43.578482 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d31d456-1029-446c-80c4-87bdbee9fbaf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d31d456-1029-446c-80c4-87bdbee9fbaf" (UID: "4d31d456-1029-446c-80c4-87bdbee9fbaf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:34:43 crc kubenswrapper[4890]: I0121 15:34:43.585714 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d31d456-1029-446c-80c4-87bdbee9fbaf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d31d456-1029-446c-80c4-87bdbee9fbaf" (UID: "4d31d456-1029-446c-80c4-87bdbee9fbaf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:34:43 crc kubenswrapper[4890]: I0121 15:34:43.679428 4890 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d31d456-1029-446c-80c4-87bdbee9fbaf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:34:43 crc kubenswrapper[4890]: I0121 15:34:43.679463 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d31d456-1029-446c-80c4-87bdbee9fbaf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:34:44 crc kubenswrapper[4890]: I0121 15:34:44.027002 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d31d456-1029-446c-80c4-87bdbee9fbaf","Type":"ContainerDied","Data":"38d5629309a0354aef2126671c4bcd5bef1db8e0023c5770214dd1c75423c393"} Jan 21 15:34:44 crc kubenswrapper[4890]: I0121 15:34:44.027042 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38d5629309a0354aef2126671c4bcd5bef1db8e0023c5770214dd1c75423c393" Jan 21 15:34:44 crc kubenswrapper[4890]: I0121 15:34:44.027096 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:34:46 crc kubenswrapper[4890]: I0121 15:34:46.325803 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7b8pk" Jan 21 15:34:46 crc kubenswrapper[4890]: I0121 15:34:46.421924 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:46 crc kubenswrapper[4890]: I0121 15:34:46.427299 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:34:48 crc kubenswrapper[4890]: I0121 15:34:48.762072 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:34:48 crc kubenswrapper[4890]: I0121 15:34:48.762372 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:34:50 crc kubenswrapper[4890]: I0121 15:34:50.811278 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" Jan 21 15:34:57 crc kubenswrapper[4890]: I0121 15:34:57.007949 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqj99" Jan 21 15:35:02 crc kubenswrapper[4890]: E0121 15:35:02.308471 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 15:35:02 crc kubenswrapper[4890]: E0121 15:35:02.309216 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvhsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5vd5j_openshift-marketplace(2029ad20-8b9d-47d2-b9a2-37bfe9887f44): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 21 15:35:02 crc kubenswrapper[4890]: E0121 15:35:02.310407 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5vd5j" podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" Jan 21 15:35:05 crc kubenswrapper[4890]: I0121 15:35:05.705409 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:35:08 crc kubenswrapper[4890]: I0121 15:35:08.943319 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:35:08 crc kubenswrapper[4890]: E0121 15:35:08.944309 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d31d456-1029-446c-80c4-87bdbee9fbaf" containerName="pruner" Jan 21 15:35:08 crc kubenswrapper[4890]: I0121 15:35:08.944323 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d31d456-1029-446c-80c4-87bdbee9fbaf" containerName="pruner" Jan 21 15:35:08 crc kubenswrapper[4890]: I0121 15:35:08.944539 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d31d456-1029-446c-80c4-87bdbee9fbaf" containerName="pruner" Jan 21 15:35:08 crc kubenswrapper[4890]: I0121 15:35:08.946219 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:08 crc kubenswrapper[4890]: I0121 15:35:08.949765 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 15:35:08 crc kubenswrapper[4890]: I0121 15:35:08.950610 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 15:35:08 crc kubenswrapper[4890]: I0121 15:35:08.954543 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:35:09 crc kubenswrapper[4890]: I0121 15:35:09.117337 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c085a2cf-defc-45aa-97aa-702eaae200a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c085a2cf-defc-45aa-97aa-702eaae200a2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:09 crc kubenswrapper[4890]: I0121 15:35:09.117587 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c085a2cf-defc-45aa-97aa-702eaae200a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c085a2cf-defc-45aa-97aa-702eaae200a2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:09 crc kubenswrapper[4890]: I0121 15:35:09.219053 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c085a2cf-defc-45aa-97aa-702eaae200a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c085a2cf-defc-45aa-97aa-702eaae200a2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:09 crc kubenswrapper[4890]: I0121 15:35:09.219162 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c085a2cf-defc-45aa-97aa-702eaae200a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c085a2cf-defc-45aa-97aa-702eaae200a2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:09 crc kubenswrapper[4890]: I0121 15:35:09.219224 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c085a2cf-defc-45aa-97aa-702eaae200a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c085a2cf-defc-45aa-97aa-702eaae200a2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:09 crc kubenswrapper[4890]: I0121 15:35:09.242231 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c085a2cf-defc-45aa-97aa-702eaae200a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c085a2cf-defc-45aa-97aa-702eaae200a2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:09 crc kubenswrapper[4890]: I0121 15:35:09.285798 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.337242 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.342591 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.355883 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.479922 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b04bfe8-0704-492a-a823-8defb73acbd7-kube-api-access\") pod \"installer-9-crc\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.480081 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-var-lock\") pod \"installer-9-crc\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.480158 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.581641 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b04bfe8-0704-492a-a823-8defb73acbd7-kube-api-access\") pod \"installer-9-crc\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.581762 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-var-lock\") pod \"installer-9-crc\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.581800 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.581897 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.581955 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-var-lock\") pod \"installer-9-crc\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.608396 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b04bfe8-0704-492a-a823-8defb73acbd7-kube-api-access\") pod \"installer-9-crc\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:13 crc kubenswrapper[4890]: I0121 15:35:13.675528 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:35:17 crc kubenswrapper[4890]: E0121 15:35:17.532223 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 15:35:17 crc kubenswrapper[4890]: E0121 15:35:17.532630 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtq4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResiz
ePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4g6l9_openshift-marketplace(e5b533fe-8e5c-45c4-8168-fa7a5fb323f6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:35:17 crc kubenswrapper[4890]: E0121 15:35:17.534415 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4g6l9" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" Jan 21 15:35:18 crc kubenswrapper[4890]: I0121 15:35:18.762218 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:35:18 crc kubenswrapper[4890]: I0121 15:35:18.762287 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:35:18 crc kubenswrapper[4890]: I0121 15:35:18.762339 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:35:18 crc kubenswrapper[4890]: I0121 15:35:18.762870 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:35:18 crc kubenswrapper[4890]: I0121 15:35:18.762954 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731" gracePeriod=600 Jan 21 15:35:22 crc kubenswrapper[4890]: I0121 15:35:22.256637 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731" exitCode=0 Jan 21 15:35:22 crc kubenswrapper[4890]: I0121 15:35:22.256737 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731"} Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.628308 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4g6l9" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.806046 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.806210 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47z6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-56k6w_openshift-marketplace(4fe216b5-31a0-4a3e-aa65-c35c43fb6073): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.807415 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-56k6w" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.810056 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.810199 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv428,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5vt8l_openshift-marketplace(b30c6789-488c-4191-bbb2-24ff82f8c648): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.811410 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5vt8l" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.900570 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.901121 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgmkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j9pcl_openshift-marketplace(5fbee014-1292-47e2-b628-a2bf014b6f09): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:35:22 crc kubenswrapper[4890]: E0121 15:35:22.902378 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j9pcl" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" Jan 21 15:35:24 crc 
kubenswrapper[4890]: E0121 15:35:24.361897 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-56k6w" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.361968 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5vt8l" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.362072 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j9pcl" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.444649 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.445495 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8gbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vdj8x_openshift-marketplace(37c92d1b-6b73-4c8f-b5f5-39062afd3003): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.446901 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vdj8x" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" Jan 21 15:35:24 crc 
kubenswrapper[4890]: E0121 15:35:24.483715 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.484141 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2x9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-xj52j_openshift-marketplace(09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.485606 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xj52j" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.496291 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.496496 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ccg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kxtgf_openshift-marketplace(35199ad9-530d-4bc4-b3bb-6b89cc5c477e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:35:24 crc kubenswrapper[4890]: E0121 15:35:24.498149 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kxtgf" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" Jan 21 15:35:24 crc 
kubenswrapper[4890]: I0121 15:35:24.608144 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:35:24 crc kubenswrapper[4890]: I0121 15:35:24.681809 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:35:24 crc kubenswrapper[4890]: I0121 15:35:24.700668 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j9mfr"] Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.281167 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"67982b0233662b552433e8cc5e81f5a900b3f7fff6d2f2fc042695614d9cb5be"} Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.284173 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" event={"ID":"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6","Type":"ContainerStarted","Data":"16a07d5e0ebbf4e5d4645009559ce5b5c26b020b1b81514b05d7c769313174d4"} Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.284199 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" event={"ID":"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6","Type":"ContainerStarted","Data":"b999610ba889de2f1956db332a2deae63849572f4af8af20e21b2839f5402d5f"} Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.284210 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j9mfr" event={"ID":"a86abbe4-e7c5-4a3e-a8d7-02d82267ded6","Type":"ContainerStarted","Data":"58cdbeb21405ff8820f3f5f3e9990562769e57898ecb7abaab78b8b8986de232"} Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.286375 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"c085a2cf-defc-45aa-97aa-702eaae200a2","Type":"ContainerStarted","Data":"1b18aa5d986708d5b08f04d13e0601d32ad85be75ca480f897e15555482c9012"} Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.286406 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c085a2cf-defc-45aa-97aa-702eaae200a2","Type":"ContainerStarted","Data":"d60c7810ad98abb90c5eb948a208d889217a23d3c441668e8217deb5cb68fdaf"} Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.289373 4890 generic.go:334] "Generic (PLEG): container finished" podID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerID="edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc" exitCode=0 Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.289432 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vd5j" event={"ID":"2029ad20-8b9d-47d2-b9a2-37bfe9887f44","Type":"ContainerDied","Data":"edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc"} Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.293202 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b04bfe8-0704-492a-a823-8defb73acbd7","Type":"ContainerStarted","Data":"91e3461e8cda138de9ba80e9d2f1e2a99b1e05400332ae12c4669aa4e069b94a"} Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.293238 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b04bfe8-0704-492a-a823-8defb73acbd7","Type":"ContainerStarted","Data":"74672de2b266f28ec2f46972b5eb61d27256239a2844b43f3ea0ef36c204ed04"} Jan 21 15:35:25 crc kubenswrapper[4890]: E0121 15:35:25.293787 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-xj52j" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" Jan 21 15:35:25 crc kubenswrapper[4890]: E0121 15:35:25.296770 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vdj8x" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" Jan 21 15:35:25 crc kubenswrapper[4890]: E0121 15:35:25.296772 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kxtgf" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.334976 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=17.334942493 podStartE2EDuration="17.334942493s" podCreationTimestamp="2026-01-21 15:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:35:25.333470038 +0000 UTC m=+207.694912457" watchObservedRunningTime="2026-01-21 15:35:25.334942493 +0000 UTC m=+207.696384902" Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.371041 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-j9mfr" podStartSLOduration=189.371023523 podStartE2EDuration="3m9.371023523s" podCreationTimestamp="2026-01-21 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:35:25.368704508 +0000 UTC m=+207.730146927" watchObservedRunningTime="2026-01-21 
15:35:25.371023523 +0000 UTC m=+207.732465932" Jan 21 15:35:25 crc kubenswrapper[4890]: I0121 15:35:25.439252 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=12.439237049 podStartE2EDuration="12.439237049s" podCreationTimestamp="2026-01-21 15:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:35:25.437777395 +0000 UTC m=+207.799219824" watchObservedRunningTime="2026-01-21 15:35:25.439237049 +0000 UTC m=+207.800679458" Jan 21 15:35:26 crc kubenswrapper[4890]: I0121 15:35:26.299802 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vd5j" event={"ID":"2029ad20-8b9d-47d2-b9a2-37bfe9887f44","Type":"ContainerStarted","Data":"c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152"} Jan 21 15:35:26 crc kubenswrapper[4890]: I0121 15:35:26.301462 4890 generic.go:334] "Generic (PLEG): container finished" podID="c085a2cf-defc-45aa-97aa-702eaae200a2" containerID="1b18aa5d986708d5b08f04d13e0601d32ad85be75ca480f897e15555482c9012" exitCode=0 Jan 21 15:35:26 crc kubenswrapper[4890]: I0121 15:35:26.301707 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c085a2cf-defc-45aa-97aa-702eaae200a2","Type":"ContainerDied","Data":"1b18aa5d986708d5b08f04d13e0601d32ad85be75ca480f897e15555482c9012"} Jan 21 15:35:26 crc kubenswrapper[4890]: I0121 15:35:26.321527 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5vd5j" podStartSLOduration=3.460197449 podStartE2EDuration="59.321512782s" podCreationTimestamp="2026-01-21 15:34:27 +0000 UTC" firstStartedPulling="2026-01-21 15:34:29.85956407 +0000 UTC m=+152.221006479" lastFinishedPulling="2026-01-21 15:35:25.720879403 +0000 UTC m=+208.082321812" 
observedRunningTime="2026-01-21 15:35:26.31934368 +0000 UTC m=+208.680786089" watchObservedRunningTime="2026-01-21 15:35:26.321512782 +0000 UTC m=+208.682955191" Jan 21 15:35:27 crc kubenswrapper[4890]: I0121 15:35:27.513885 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:27 crc kubenswrapper[4890]: I0121 15:35:27.731016 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c085a2cf-defc-45aa-97aa-702eaae200a2-kubelet-dir\") pod \"c085a2cf-defc-45aa-97aa-702eaae200a2\" (UID: \"c085a2cf-defc-45aa-97aa-702eaae200a2\") " Jan 21 15:35:27 crc kubenswrapper[4890]: I0121 15:35:27.731131 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c085a2cf-defc-45aa-97aa-702eaae200a2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c085a2cf-defc-45aa-97aa-702eaae200a2" (UID: "c085a2cf-defc-45aa-97aa-702eaae200a2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:35:27 crc kubenswrapper[4890]: I0121 15:35:27.731172 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c085a2cf-defc-45aa-97aa-702eaae200a2-kube-api-access\") pod \"c085a2cf-defc-45aa-97aa-702eaae200a2\" (UID: \"c085a2cf-defc-45aa-97aa-702eaae200a2\") " Jan 21 15:35:27 crc kubenswrapper[4890]: I0121 15:35:27.732067 4890 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c085a2cf-defc-45aa-97aa-702eaae200a2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:27 crc kubenswrapper[4890]: I0121 15:35:27.738515 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c085a2cf-defc-45aa-97aa-702eaae200a2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c085a2cf-defc-45aa-97aa-702eaae200a2" (UID: "c085a2cf-defc-45aa-97aa-702eaae200a2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:35:27 crc kubenswrapper[4890]: I0121 15:35:27.833226 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c085a2cf-defc-45aa-97aa-702eaae200a2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:27 crc kubenswrapper[4890]: I0121 15:35:27.942063 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:35:27 crc kubenswrapper[4890]: I0121 15:35:27.942105 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:35:28 crc kubenswrapper[4890]: I0121 15:35:28.084576 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:35:28 crc kubenswrapper[4890]: I0121 15:35:28.325511 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:35:28 crc kubenswrapper[4890]: I0121 15:35:28.325768 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c085a2cf-defc-45aa-97aa-702eaae200a2","Type":"ContainerDied","Data":"d60c7810ad98abb90c5eb948a208d889217a23d3c441668e8217deb5cb68fdaf"} Jan 21 15:35:28 crc kubenswrapper[4890]: I0121 15:35:28.325830 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d60c7810ad98abb90c5eb948a208d889217a23d3c441668e8217deb5cb68fdaf" Jan 21 15:35:31 crc kubenswrapper[4890]: I0121 15:35:31.172317 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwk8l"] Jan 21 15:35:37 crc kubenswrapper[4890]: I0121 15:35:37.980562 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:35:39 crc kubenswrapper[4890]: I0121 15:35:39.378417 4890 generic.go:334] "Generic (PLEG): container finished" podID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerID="6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2" exitCode=0 Jan 21 15:35:39 crc kubenswrapper[4890]: I0121 15:35:39.378487 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxtgf" event={"ID":"35199ad9-530d-4bc4-b3bb-6b89cc5c477e","Type":"ContainerDied","Data":"6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2"} Jan 21 15:35:39 crc kubenswrapper[4890]: I0121 15:35:39.381564 4890 generic.go:334] "Generic (PLEG): container finished" podID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerID="1d79ef7fbbd1c6046291a5187af6d63974ff7777e0f07684a3730eaf8268812e" exitCode=0 Jan 21 15:35:39 crc kubenswrapper[4890]: I0121 15:35:39.381639 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g6l9" 
event={"ID":"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6","Type":"ContainerDied","Data":"1d79ef7fbbd1c6046291a5187af6d63974ff7777e0f07684a3730eaf8268812e"} Jan 21 15:35:39 crc kubenswrapper[4890]: I0121 15:35:39.384992 4890 generic.go:334] "Generic (PLEG): container finished" podID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerID="5ff4ba00f37a5c1e5d4be77ee9f8586a8ea38230b9554b98b78b8ee50999f72f" exitCode=0 Jan 21 15:35:39 crc kubenswrapper[4890]: I0121 15:35:39.385065 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pcl" event={"ID":"5fbee014-1292-47e2-b628-a2bf014b6f09","Type":"ContainerDied","Data":"5ff4ba00f37a5c1e5d4be77ee9f8586a8ea38230b9554b98b78b8ee50999f72f"} Jan 21 15:35:39 crc kubenswrapper[4890]: I0121 15:35:39.393140 4890 generic.go:334] "Generic (PLEG): container finished" podID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerID="73a03f3e105426264219a66e4d16997464f3612c1dee3ba97c5062a8ba018b13" exitCode=0 Jan 21 15:35:39 crc kubenswrapper[4890]: I0121 15:35:39.393279 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xj52j" event={"ID":"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219","Type":"ContainerDied","Data":"73a03f3e105426264219a66e4d16997464f3612c1dee3ba97c5062a8ba018b13"} Jan 21 15:35:39 crc kubenswrapper[4890]: I0121 15:35:39.399246 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56k6w" event={"ID":"4fe216b5-31a0-4a3e-aa65-c35c43fb6073","Type":"ContainerStarted","Data":"5e7560c018dafbb419cdedfd636b9fcd40bace05b7dce41612a565a149aeff7e"} Jan 21 15:35:40 crc kubenswrapper[4890]: I0121 15:35:40.381685 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vd5j"] Jan 21 15:35:40 crc kubenswrapper[4890]: I0121 15:35:40.381946 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5vd5j" 
podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerName="registry-server" containerID="cri-o://c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152" gracePeriod=2 Jan 21 15:35:40 crc kubenswrapper[4890]: I0121 15:35:40.407344 4890 generic.go:334] "Generic (PLEG): container finished" podID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerID="5e7560c018dafbb419cdedfd636b9fcd40bace05b7dce41612a565a149aeff7e" exitCode=0 Jan 21 15:35:40 crc kubenswrapper[4890]: I0121 15:35:40.407559 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56k6w" event={"ID":"4fe216b5-31a0-4a3e-aa65-c35c43fb6073","Type":"ContainerDied","Data":"5e7560c018dafbb419cdedfd636b9fcd40bace05b7dce41612a565a149aeff7e"} Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.102500 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.220445 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-utilities\") pod \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.220505 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-catalog-content\") pod \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\" (UID: \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.220527 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvhsz\" (UniqueName: \"kubernetes.io/projected/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-kube-api-access-dvhsz\") pod \"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\" (UID: 
\"2029ad20-8b9d-47d2-b9a2-37bfe9887f44\") " Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.221474 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-utilities" (OuterVolumeSpecName: "utilities") pod "2029ad20-8b9d-47d2-b9a2-37bfe9887f44" (UID: "2029ad20-8b9d-47d2-b9a2-37bfe9887f44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.226207 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-kube-api-access-dvhsz" (OuterVolumeSpecName: "kube-api-access-dvhsz") pod "2029ad20-8b9d-47d2-b9a2-37bfe9887f44" (UID: "2029ad20-8b9d-47d2-b9a2-37bfe9887f44"). InnerVolumeSpecName "kube-api-access-dvhsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.266410 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2029ad20-8b9d-47d2-b9a2-37bfe9887f44" (UID: "2029ad20-8b9d-47d2-b9a2-37bfe9887f44"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.321824 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.321861 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.321876 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvhsz\" (UniqueName: \"kubernetes.io/projected/2029ad20-8b9d-47d2-b9a2-37bfe9887f44-kube-api-access-dvhsz\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.416043 4890 generic.go:334] "Generic (PLEG): container finished" podID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerID="c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152" exitCode=0 Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.416119 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vd5j" event={"ID":"2029ad20-8b9d-47d2-b9a2-37bfe9887f44","Type":"ContainerDied","Data":"c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152"} Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.416157 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vd5j" Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.416174 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vd5j" event={"ID":"2029ad20-8b9d-47d2-b9a2-37bfe9887f44","Type":"ContainerDied","Data":"c01c464a6b3eb9a8cc8d49b503bbc83566f1b23fe1b1286b5c05f5ea4e3ba36f"} Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.416218 4890 scope.go:117] "RemoveContainer" containerID="c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152" Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.455849 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vd5j"] Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.458236 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vd5j"] Jan 21 15:35:41 crc kubenswrapper[4890]: I0121 15:35:41.922683 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" path="/var/lib/kubelet/pods/2029ad20-8b9d-47d2-b9a2-37bfe9887f44/volumes" Jan 21 15:35:42 crc kubenswrapper[4890]: I0121 15:35:42.235421 4890 scope.go:117] "RemoveContainer" containerID="edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc" Jan 21 15:35:43 crc kubenswrapper[4890]: I0121 15:35:43.024307 4890 scope.go:117] "RemoveContainer" containerID="2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a" Jan 21 15:35:43 crc kubenswrapper[4890]: I0121 15:35:43.366062 4890 scope.go:117] "RemoveContainer" containerID="c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152" Jan 21 15:35:43 crc kubenswrapper[4890]: E0121 15:35:43.366715 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152\": container with ID starting with 
c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152 not found: ID does not exist" containerID="c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152" Jan 21 15:35:43 crc kubenswrapper[4890]: I0121 15:35:43.366749 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152"} err="failed to get container status \"c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152\": rpc error: code = NotFound desc = could not find container \"c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152\": container with ID starting with c7b21b008bffc2940cc7d36a7c344061a012844415b36f6cf4ea3d05d6115152 not found: ID does not exist" Jan 21 15:35:43 crc kubenswrapper[4890]: I0121 15:35:43.366777 4890 scope.go:117] "RemoveContainer" containerID="edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc" Jan 21 15:35:43 crc kubenswrapper[4890]: E0121 15:35:43.367148 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc\": container with ID starting with edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc not found: ID does not exist" containerID="edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc" Jan 21 15:35:43 crc kubenswrapper[4890]: I0121 15:35:43.367175 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc"} err="failed to get container status \"edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc\": rpc error: code = NotFound desc = could not find container \"edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc\": container with ID starting with edaa3dda54e759263b970e48e8cb927c60d4f4a4158f2c7e56c6648de213e9dc not found: ID does not 
exist" Jan 21 15:35:43 crc kubenswrapper[4890]: I0121 15:35:43.367192 4890 scope.go:117] "RemoveContainer" containerID="2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a" Jan 21 15:35:43 crc kubenswrapper[4890]: E0121 15:35:43.367450 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a\": container with ID starting with 2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a not found: ID does not exist" containerID="2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a" Jan 21 15:35:43 crc kubenswrapper[4890]: I0121 15:35:43.367480 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a"} err="failed to get container status \"2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a\": rpc error: code = NotFound desc = could not find container \"2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a\": container with ID starting with 2925db3ea3ceb909aa38e60abf40e5cd2ba52dc0bbac2af0aa2727ddeef6592a not found: ID does not exist" Jan 21 15:35:46 crc kubenswrapper[4890]: I0121 15:35:46.461523 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdj8x" event={"ID":"37c92d1b-6b73-4c8f-b5f5-39062afd3003","Type":"ContainerStarted","Data":"749090133a998a9125312c009da85ab6f5008d548549e121db7a5a0077128466"} Jan 21 15:35:46 crc kubenswrapper[4890]: I0121 15:35:46.476404 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vt8l" event={"ID":"b30c6789-488c-4191-bbb2-24ff82f8c648","Type":"ContainerStarted","Data":"64d1b7005df06540dc61b6adda032bec5b05bc1933dad8b6e77c816852c2c479"} Jan 21 15:35:46 crc kubenswrapper[4890]: I0121 15:35:46.510469 4890 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-xj52j" event={"ID":"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219","Type":"ContainerStarted","Data":"2accf25d3f4fb1076c0981f8104f2a65732cb74de7dc36d86ff38d999fc6f564"} Jan 21 15:35:46 crc kubenswrapper[4890]: I0121 15:35:46.517540 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxtgf" event={"ID":"35199ad9-530d-4bc4-b3bb-6b89cc5c477e","Type":"ContainerStarted","Data":"c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678"} Jan 21 15:35:46 crc kubenswrapper[4890]: I0121 15:35:46.546612 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xj52j" podStartSLOduration=4.267270722 podStartE2EDuration="1m21.546591587s" podCreationTimestamp="2026-01-21 15:34:25 +0000 UTC" firstStartedPulling="2026-01-21 15:34:28.814193891 +0000 UTC m=+151.175636300" lastFinishedPulling="2026-01-21 15:35:46.093514736 +0000 UTC m=+228.454957165" observedRunningTime="2026-01-21 15:35:46.544977018 +0000 UTC m=+228.906419427" watchObservedRunningTime="2026-01-21 15:35:46.546591587 +0000 UTC m=+228.908033996" Jan 21 15:35:46 crc kubenswrapper[4890]: I0121 15:35:46.581974 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kxtgf" podStartSLOduration=4.37381046 podStartE2EDuration="1m21.581912999s" podCreationTimestamp="2026-01-21 15:34:25 +0000 UTC" firstStartedPulling="2026-01-21 15:34:28.846080659 +0000 UTC m=+151.207523068" lastFinishedPulling="2026-01-21 15:35:46.054183188 +0000 UTC m=+228.415625607" observedRunningTime="2026-01-21 15:35:46.577860642 +0000 UTC m=+228.939303051" watchObservedRunningTime="2026-01-21 15:35:46.581912999 +0000 UTC m=+228.943355408" Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 15:35:47.525708 4890 generic.go:334] "Generic (PLEG): container finished" podID="b30c6789-488c-4191-bbb2-24ff82f8c648" 
containerID="64d1b7005df06540dc61b6adda032bec5b05bc1933dad8b6e77c816852c2c479" exitCode=0 Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 15:35:47.525793 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vt8l" event={"ID":"b30c6789-488c-4191-bbb2-24ff82f8c648","Type":"ContainerDied","Data":"64d1b7005df06540dc61b6adda032bec5b05bc1933dad8b6e77c816852c2c479"} Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 15:35:47.529662 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56k6w" event={"ID":"4fe216b5-31a0-4a3e-aa65-c35c43fb6073","Type":"ContainerStarted","Data":"7890aaaa7455c555fad8510c98ea107cefb911ebdaf04897c2e06dd35cfa40ee"} Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 15:35:47.538950 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g6l9" event={"ID":"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6","Type":"ContainerStarted","Data":"e5f5a081503d56e4e1604451fd9d63cd4f5537dd8b47332131e4c1d8c96ee49b"} Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 15:35:47.541833 4890 generic.go:334] "Generic (PLEG): container finished" podID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerID="749090133a998a9125312c009da85ab6f5008d548549e121db7a5a0077128466" exitCode=0 Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 15:35:47.541918 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdj8x" event={"ID":"37c92d1b-6b73-4c8f-b5f5-39062afd3003","Type":"ContainerDied","Data":"749090133a998a9125312c009da85ab6f5008d548549e121db7a5a0077128466"} Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 15:35:47.544991 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pcl" event={"ID":"5fbee014-1292-47e2-b628-a2bf014b6f09","Type":"ContainerStarted","Data":"91ee9f94f13bd30b3a0d1b148970f2a6326bec387874c7cc179a315c8f225a9f"} Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 
15:35:47.578732 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9pcl" podStartSLOduration=3.390302004 podStartE2EDuration="1m19.578714211s" podCreationTimestamp="2026-01-21 15:34:28 +0000 UTC" firstStartedPulling="2026-01-21 15:34:29.86614804 +0000 UTC m=+152.227590449" lastFinishedPulling="2026-01-21 15:35:46.054560237 +0000 UTC m=+228.416002656" observedRunningTime="2026-01-21 15:35:47.57487411 +0000 UTC m=+229.936316519" watchObservedRunningTime="2026-01-21 15:35:47.578714211 +0000 UTC m=+229.940156620" Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 15:35:47.633560 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-56k6w" podStartSLOduration=5.381823151 podStartE2EDuration="1m22.633546178s" podCreationTimestamp="2026-01-21 15:34:25 +0000 UTC" firstStartedPulling="2026-01-21 15:34:28.813892944 +0000 UTC m=+151.175335353" lastFinishedPulling="2026-01-21 15:35:46.065615971 +0000 UTC m=+228.427058380" observedRunningTime="2026-01-21 15:35:47.613309886 +0000 UTC m=+229.974752315" watchObservedRunningTime="2026-01-21 15:35:47.633546178 +0000 UTC m=+229.994988587" Jan 21 15:35:47 crc kubenswrapper[4890]: I0121 15:35:47.653615 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4g6l9" podStartSLOduration=3.025504839 podStartE2EDuration="1m19.653598356s" podCreationTimestamp="2026-01-21 15:34:28 +0000 UTC" firstStartedPulling="2026-01-21 15:34:29.873943918 +0000 UTC m=+152.235386327" lastFinishedPulling="2026-01-21 15:35:46.502037435 +0000 UTC m=+228.863479844" observedRunningTime="2026-01-21 15:35:47.653160936 +0000 UTC m=+230.014603345" watchObservedRunningTime="2026-01-21 15:35:47.653598356 +0000 UTC m=+230.015040765" Jan 21 15:35:48 crc kubenswrapper[4890]: I0121 15:35:48.599442 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:35:48 crc kubenswrapper[4890]: I0121 15:35:48.599856 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:35:48 crc kubenswrapper[4890]: I0121 15:35:48.642944 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:35:48 crc kubenswrapper[4890]: I0121 15:35:48.643000 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:35:49 crc kubenswrapper[4890]: I0121 15:35:49.559110 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdj8x" event={"ID":"37c92d1b-6b73-4c8f-b5f5-39062afd3003","Type":"ContainerStarted","Data":"6666b6dd6d54d6932c534123d2ca6870c59bd73fcfe3ec6f1b386369315ef945"} Jan 21 15:35:49 crc kubenswrapper[4890]: I0121 15:35:49.561146 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vt8l" event={"ID":"b30c6789-488c-4191-bbb2-24ff82f8c648","Type":"ContainerStarted","Data":"38063aaa80cbff2685b86feccf87dada6bb2c6f34c03ce1dd5f30bd9cf9ca150"} Jan 21 15:35:49 crc kubenswrapper[4890]: I0121 15:35:49.583892 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vdj8x" podStartSLOduration=4.376050922 podStartE2EDuration="1m24.583871791s" podCreationTimestamp="2026-01-21 15:34:25 +0000 UTC" firstStartedPulling="2026-01-21 15:34:28.697457575 +0000 UTC m=+151.058899974" lastFinishedPulling="2026-01-21 15:35:48.905278434 +0000 UTC m=+231.266720843" observedRunningTime="2026-01-21 15:35:49.577108059 +0000 UTC m=+231.938550468" watchObservedRunningTime="2026-01-21 15:35:49.583871791 +0000 UTC m=+231.945314200" Jan 21 15:35:49 crc kubenswrapper[4890]: I0121 15:35:49.600162 4890 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5vt8l" podStartSLOduration=3.534062897 podStartE2EDuration="1m22.600144249s" podCreationTimestamp="2026-01-21 15:34:27 +0000 UTC" firstStartedPulling="2026-01-21 15:34:29.871228136 +0000 UTC m=+152.232670545" lastFinishedPulling="2026-01-21 15:35:48.937309488 +0000 UTC m=+231.298751897" observedRunningTime="2026-01-21 15:35:49.599495073 +0000 UTC m=+231.960937512" watchObservedRunningTime="2026-01-21 15:35:49.600144249 +0000 UTC m=+231.961586668" Jan 21 15:35:49 crc kubenswrapper[4890]: I0121 15:35:49.653783 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9pcl" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="registry-server" probeResult="failure" output=< Jan 21 15:35:49 crc kubenswrapper[4890]: timeout: failed to connect service ":50051" within 1s Jan 21 15:35:49 crc kubenswrapper[4890]: > Jan 21 15:35:49 crc kubenswrapper[4890]: I0121 15:35:49.702302 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4g6l9" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerName="registry-server" probeResult="failure" output=< Jan 21 15:35:49 crc kubenswrapper[4890]: timeout: failed to connect service ":50051" within 1s Jan 21 15:35:49 crc kubenswrapper[4890]: > Jan 21 15:35:55 crc kubenswrapper[4890]: I0121 15:35:55.646106 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:35:55 crc kubenswrapper[4890]: I0121 15:35:55.646835 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:35:55 crc kubenswrapper[4890]: I0121 15:35:55.670160 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:35:55 crc kubenswrapper[4890]: I0121 15:35:55.670257 4890 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:35:55 crc kubenswrapper[4890]: I0121 15:35:55.713622 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:35:55 crc kubenswrapper[4890]: I0121 15:35:55.728849 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:35:56 crc kubenswrapper[4890]: I0121 15:35:56.047764 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:35:56 crc kubenswrapper[4890]: I0121 15:35:56.047837 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:35:56 crc kubenswrapper[4890]: I0121 15:35:56.123795 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:35:56 crc kubenswrapper[4890]: I0121 15:35:56.202377 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" podUID="d5324902-a12c-492c-b66c-29c0b27d84cf" containerName="oauth-openshift" containerID="cri-o://9c255ba7c7f406fead97d1aec53ed4a2109121a5c8160dd62777a5755f5b6ace" gracePeriod=15 Jan 21 15:35:56 crc kubenswrapper[4890]: I0121 15:35:56.220837 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:35:56 crc kubenswrapper[4890]: I0121 15:35:56.220919 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:35:56 crc kubenswrapper[4890]: I0121 15:35:56.291134 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:35:56 crc kubenswrapper[4890]: I0121 15:35:56.666466 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:35:57 crc kubenswrapper[4890]: I0121 15:35:57.570227 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:35:57 crc kubenswrapper[4890]: I0121 15:35:57.570922 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:35:57 crc kubenswrapper[4890]: I0121 15:35:57.639843 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:35:57 crc kubenswrapper[4890]: I0121 15:35:57.647646 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-56k6w" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerName="registry-server" probeResult="failure" output=< Jan 21 15:35:57 crc kubenswrapper[4890]: timeout: failed to connect service ":50051" within 1s Jan 21 15:35:57 crc kubenswrapper[4890]: > Jan 21 15:35:57 crc kubenswrapper[4890]: I0121 15:35:57.673435 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-kxtgf" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerName="registry-server" probeResult="failure" output=< Jan 21 15:35:57 crc kubenswrapper[4890]: timeout: failed to connect service ":50051" within 1s Jan 21 15:35:57 crc kubenswrapper[4890]: > Jan 21 15:35:57 crc kubenswrapper[4890]: I0121 15:35:57.676125 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-vdj8x" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerName="registry-server" probeResult="failure" output=< Jan 21 15:35:57 crc kubenswrapper[4890]: timeout: failed to 
connect service ":50051" within 1s Jan 21 15:35:57 crc kubenswrapper[4890]: > Jan 21 15:35:58 crc kubenswrapper[4890]: I0121 15:35:58.682342 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:35:58 crc kubenswrapper[4890]: I0121 15:35:58.717690 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:35:58 crc kubenswrapper[4890]: I0121 15:35:58.765068 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:35:59 crc kubenswrapper[4890]: I0121 15:35:59.645980 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9pcl" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="registry-server" probeResult="failure" output=< Jan 21 15:35:59 crc kubenswrapper[4890]: timeout: failed to connect service ":50051" within 1s Jan 21 15:35:59 crc kubenswrapper[4890]: > Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.277528 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-558db77b4-mwk8l_d5324902-a12c-492c-b66c-29c0b27d84cf/oauth-openshift/0.log" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.277615 4890 generic.go:334] "Generic (PLEG): container finished" podID="d5324902-a12c-492c-b66c-29c0b27d84cf" containerID="9c255ba7c7f406fead97d1aec53ed4a2109121a5c8160dd62777a5755f5b6ace" exitCode=-1 Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.277685 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" event={"ID":"d5324902-a12c-492c-b66c-29c0b27d84cf","Type":"ContainerDied","Data":"9c255ba7c7f406fead97d1aec53ed4a2109121a5c8160dd62777a5755f5b6ace"} Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.491878 4890 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.532567 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-qgg8k"] Jan 21 15:36:05 crc kubenswrapper[4890]: E0121 15:36:05.532874 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c085a2cf-defc-45aa-97aa-702eaae200a2" containerName="pruner" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.532895 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c085a2cf-defc-45aa-97aa-702eaae200a2" containerName="pruner" Jan 21 15:36:05 crc kubenswrapper[4890]: E0121 15:36:05.532919 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerName="extract-content" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.532933 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerName="extract-content" Jan 21 15:36:05 crc kubenswrapper[4890]: E0121 15:36:05.532953 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerName="extract-utilities" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.532969 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerName="extract-utilities" Jan 21 15:36:05 crc kubenswrapper[4890]: E0121 15:36:05.532993 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerName="registry-server" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.533006 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerName="registry-server" Jan 21 15:36:05 crc kubenswrapper[4890]: E0121 15:36:05.533031 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d5324902-a12c-492c-b66c-29c0b27d84cf" containerName="oauth-openshift" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.533044 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5324902-a12c-492c-b66c-29c0b27d84cf" containerName="oauth-openshift" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.533222 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2029ad20-8b9d-47d2-b9a2-37bfe9887f44" containerName="registry-server" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.533253 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5324902-a12c-492c-b66c-29c0b27d84cf" containerName="oauth-openshift" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.533275 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="c085a2cf-defc-45aa-97aa-702eaae200a2" containerName="pruner" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.533900 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.560056 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-qgg8k"] Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594386 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqrcv\" (UniqueName: \"kubernetes.io/projected/d5324902-a12c-492c-b66c-29c0b27d84cf-kube-api-access-mqrcv\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594485 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-trusted-ca-bundle\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: 
\"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594530 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-error\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594578 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-login\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594634 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-router-certs\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594657 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-dir\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594684 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-serving-cert\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594731 4890 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-provider-selection\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594766 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-policies\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594811 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-ocp-branding-template\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594833 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-idp-0-file-data\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594874 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-service-ca\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594900 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-session\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.594921 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-cliconfig\") pod \"d5324902-a12c-492c-b66c-29c0b27d84cf\" (UID: \"d5324902-a12c-492c-b66c-29c0b27d84cf\") " Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.595091 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.595156 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2vm8\" (UniqueName: \"kubernetes.io/projected/04df71c6-bda9-41b5-8c5b-62294307db64-kube-api-access-r2vm8\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.595379 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc 
kubenswrapper[4890]: I0121 15:36:05.595440 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.595505 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596420 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04df71c6-bda9-41b5-8c5b-62294307db64-audit-dir\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596443 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596500 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-audit-policies\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596570 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596649 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596682 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596701 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596831 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596880 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596914 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.596953 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.597046 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.597117 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.597139 4890 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.597153 4890 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.597536 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.597755 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.601289 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.601824 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.601874 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5324902-a12c-492c-b66c-29c0b27d84cf-kube-api-access-mqrcv" (OuterVolumeSpecName: "kube-api-access-mqrcv") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "kube-api-access-mqrcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.602844 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.606901 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.607331 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.607730 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.607937 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.608180 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d5324902-a12c-492c-b66c-29c0b27d84cf" (UID: "d5324902-a12c-492c-b66c-29c0b27d84cf"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.621655 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4g6l9"] Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.621906 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4g6l9" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerName="registry-server" containerID="cri-o://e5f5a081503d56e4e1604451fd9d63cd4f5537dd8b47332131e4c1d8c96ee49b" gracePeriod=2 Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.690932 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.698785 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.698839 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.698863 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.698903 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.698920 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.698944 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.698964 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 
15:36:05.699003 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699024 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2vm8\" (UniqueName: \"kubernetes.io/projected/04df71c6-bda9-41b5-8c5b-62294307db64-kube-api-access-r2vm8\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699049 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699072 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699098 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-service-ca\") 
pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699119 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04df71c6-bda9-41b5-8c5b-62294307db64-audit-dir\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699141 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-audit-policies\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699187 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqrcv\" (UniqueName: \"kubernetes.io/projected/d5324902-a12c-492c-b66c-29c0b27d84cf-kube-api-access-mqrcv\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699200 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699211 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699222 4890 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699233 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699246 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699258 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699268 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699279 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699290 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-session\") on node \"crc\" DevicePath 
\"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.699299 4890 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5324902-a12c-492c-b66c-29c0b27d84cf-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.700209 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-audit-policies\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.703255 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.704270 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.704513 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " 
pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.705513 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.706044 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.706199 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.706300 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.707195 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/04df71c6-bda9-41b5-8c5b-62294307db64-audit-dir\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.707408 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.709067 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.709111 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.710698 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04df71c6-bda9-41b5-8c5b-62294307db64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 
15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.728121 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:36:05 crc kubenswrapper[4890]: I0121 15:36:05.900683 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2vm8\" (UniqueName: \"kubernetes.io/projected/04df71c6-bda9-41b5-8c5b-62294307db64-kube-api-access-r2vm8\") pod \"oauth-openshift-6b5f774455-qgg8k\" (UID: \"04df71c6-bda9-41b5-8c5b-62294307db64\") " pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.161770 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.298040 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.304803 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.304775 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mwk8l" event={"ID":"d5324902-a12c-492c-b66c-29c0b27d84cf","Type":"ContainerDied","Data":"8b3b0acc2d5a6e6183167eb590e02c110d6fabac1a3f7fc44784c411f5e5cbf1"} Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.304892 4890 scope.go:117] "RemoveContainer" containerID="9c255ba7c7f406fead97d1aec53ed4a2109121a5c8160dd62777a5755f5b6ace" Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.308477 4890 generic.go:334] "Generic (PLEG): container finished" podID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerID="e5f5a081503d56e4e1604451fd9d63cd4f5537dd8b47332131e4c1d8c96ee49b" exitCode=0 Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.308515 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g6l9" event={"ID":"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6","Type":"ContainerDied","Data":"e5f5a081503d56e4e1604451fd9d63cd4f5537dd8b47332131e4c1d8c96ee49b"} Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.338096 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwk8l"] Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.345472 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwk8l"] Jan 21 15:36:06 crc kubenswrapper[4890]: I0121 15:36:06.622821 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-qgg8k"] Jan 21 15:36:06 crc kubenswrapper[4890]: W0121 15:36:06.633925 4890 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04df71c6_bda9_41b5_8c5b_62294307db64.slice/crio-a980b9189fdc73294685902809fb4fc21f0e60c19402cd1154cd431854061ed8 WatchSource:0}: Error finding container a980b9189fdc73294685902809fb4fc21f0e60c19402cd1154cd431854061ed8: Status 404 returned error can't find the container with id a980b9189fdc73294685902809fb4fc21f0e60c19402cd1154cd431854061ed8 Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.313969 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" event={"ID":"04df71c6-bda9-41b5-8c5b-62294307db64","Type":"ContainerStarted","Data":"a980b9189fdc73294685902809fb4fc21f0e60c19402cd1154cd431854061ed8"} Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.316899 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g6l9" event={"ID":"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6","Type":"ContainerDied","Data":"98bac27d43abd2bb7386bac42e1aa417819156b77d6cb47f8e94d4ca62895cc5"} Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.316924 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98bac27d43abd2bb7386bac42e1aa417819156b77d6cb47f8e94d4ca62895cc5" Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.333124 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.423406 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-utilities\") pod \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.423502 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-catalog-content\") pod \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.423625 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtq4g\" (UniqueName: \"kubernetes.io/projected/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-kube-api-access-rtq4g\") pod \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\" (UID: \"e5b533fe-8e5c-45c4-8168-fa7a5fb323f6\") " Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.424084 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-utilities" (OuterVolumeSpecName: "utilities") pod "e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" (UID: "e5b533fe-8e5c-45c4-8168-fa7a5fb323f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.424196 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.430009 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-kube-api-access-rtq4g" (OuterVolumeSpecName: "kube-api-access-rtq4g") pod "e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" (UID: "e5b533fe-8e5c-45c4-8168-fa7a5fb323f6"). InnerVolumeSpecName "kube-api-access-rtq4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.526754 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtq4g\" (UniqueName: \"kubernetes.io/projected/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-kube-api-access-rtq4g\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.728275 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" (UID: "e5b533fe-8e5c-45c4-8168-fa7a5fb323f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.728960 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.824684 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxtgf"] Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.825017 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kxtgf" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerName="registry-server" containerID="cri-o://c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678" gracePeriod=2 Jan 21 15:36:07 crc kubenswrapper[4890]: I0121 15:36:07.922282 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5324902-a12c-492c-b66c-29c0b27d84cf" path="/var/lib/kubelet/pods/d5324902-a12c-492c-b66c-29c0b27d84cf/volumes" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.023571 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xj52j"] Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.024167 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xj52j" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerName="registry-server" containerID="cri-o://2accf25d3f4fb1076c0981f8104f2a65732cb74de7dc36d86ff38d999fc6f564" gracePeriod=2 Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.207746 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.240598 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-utilities\") pod \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.240704 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccg5\" (UniqueName: \"kubernetes.io/projected/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-kube-api-access-6ccg5\") pod \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.240735 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-catalog-content\") pod \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\" (UID: \"35199ad9-530d-4bc4-b3bb-6b89cc5c477e\") " Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.241416 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-utilities" (OuterVolumeSpecName: "utilities") pod "35199ad9-530d-4bc4-b3bb-6b89cc5c477e" (UID: "35199ad9-530d-4bc4-b3bb-6b89cc5c477e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.246247 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-kube-api-access-6ccg5" (OuterVolumeSpecName: "kube-api-access-6ccg5") pod "35199ad9-530d-4bc4-b3bb-6b89cc5c477e" (UID: "35199ad9-530d-4bc4-b3bb-6b89cc5c477e"). InnerVolumeSpecName "kube-api-access-6ccg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.249930 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.250000 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccg5\" (UniqueName: \"kubernetes.io/projected/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-kube-api-access-6ccg5\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.294496 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35199ad9-530d-4bc4-b3bb-6b89cc5c477e" (UID: "35199ad9-530d-4bc4-b3bb-6b89cc5c477e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.326243 4890 generic.go:334] "Generic (PLEG): container finished" podID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerID="2accf25d3f4fb1076c0981f8104f2a65732cb74de7dc36d86ff38d999fc6f564" exitCode=0 Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.326480 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xj52j" event={"ID":"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219","Type":"ContainerDied","Data":"2accf25d3f4fb1076c0981f8104f2a65732cb74de7dc36d86ff38d999fc6f564"} Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.329301 4890 generic.go:334] "Generic (PLEG): container finished" podID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerID="c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678" exitCode=0 Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.329363 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kxtgf" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.329401 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxtgf" event={"ID":"35199ad9-530d-4bc4-b3bb-6b89cc5c477e","Type":"ContainerDied","Data":"c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678"} Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.329490 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxtgf" event={"ID":"35199ad9-530d-4bc4-b3bb-6b89cc5c477e","Type":"ContainerDied","Data":"960054e0bdc95b7901b29d4e028a63e167f381131229df8ee499458f2d498e64"} Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.329527 4890 scope.go:117] "RemoveContainer" containerID="c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.334231 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4g6l9" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.334504 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" event={"ID":"04df71c6-bda9-41b5-8c5b-62294307db64","Type":"ContainerStarted","Data":"732c8d652888e834cf46dbd8a0074ae4bfceebe8846eca00a2280d35fd443829"} Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.334825 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.340304 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.356537 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35199ad9-530d-4bc4-b3bb-6b89cc5c477e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.368068 4890 scope.go:117] "RemoveContainer" containerID="6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.368778 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b5f774455-qgg8k" podStartSLOduration=37.368757254 podStartE2EDuration="37.368757254s" podCreationTimestamp="2026-01-21 15:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:36:08.360679872 +0000 UTC m=+250.722122291" watchObservedRunningTime="2026-01-21 15:36:08.368757254 +0000 UTC m=+250.730199663" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.387690 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-4g6l9"] Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.391459 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4g6l9"] Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.396820 4890 scope.go:117] "RemoveContainer" containerID="ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.427422 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxtgf"] Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.429473 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kxtgf"] Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.449235 4890 scope.go:117] "RemoveContainer" containerID="c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678" Jan 21 15:36:08 crc kubenswrapper[4890]: E0121 15:36:08.454771 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678\": container with ID starting with c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678 not found: ID does not exist" containerID="c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.454818 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678"} err="failed to get container status \"c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678\": rpc error: code = NotFound desc = could not find container \"c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678\": container with ID starting with c7292918096f16db1e0bf3ebb109d35561f347816ace00aaff24ae1d026db678 not found: ID does not exist" Jan 21 
15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.454847 4890 scope.go:117] "RemoveContainer" containerID="6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2" Jan 21 15:36:08 crc kubenswrapper[4890]: E0121 15:36:08.455249 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2\": container with ID starting with 6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2 not found: ID does not exist" containerID="6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.455280 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2"} err="failed to get container status \"6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2\": rpc error: code = NotFound desc = could not find container \"6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2\": container with ID starting with 6d8f41ae8bc4f386e778906277c7188b5a8d96bc8afcf1f089e346e8950250c2 not found: ID does not exist" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.455299 4890 scope.go:117] "RemoveContainer" containerID="ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3" Jan 21 15:36:08 crc kubenswrapper[4890]: E0121 15:36:08.455548 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3\": container with ID starting with ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3 not found: ID does not exist" containerID="ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.455573 4890 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3"} err="failed to get container status \"ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3\": rpc error: code = NotFound desc = could not find container \"ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3\": container with ID starting with ae1832bc1a9f879d75155a3cdee73a63b77dd19a6773050683560c7bd3ad7fb3 not found: ID does not exist" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.512833 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.562844 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-utilities\") pod \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.562915 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-catalog-content\") pod \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.562940 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2x9l\" (UniqueName: \"kubernetes.io/projected/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-kube-api-access-g2x9l\") pod \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\" (UID: \"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219\") " Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.563802 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-utilities" (OuterVolumeSpecName: "utilities") pod 
"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" (UID: "09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.565982 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-kube-api-access-g2x9l" (OuterVolumeSpecName: "kube-api-access-g2x9l") pod "09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" (UID: "09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219"). InnerVolumeSpecName "kube-api-access-g2x9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.624735 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" (UID: "09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.641233 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.664635 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.664916 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.664927 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2x9l\" (UniqueName: \"kubernetes.io/projected/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219-kube-api-access-g2x9l\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:08 crc kubenswrapper[4890]: I0121 15:36:08.680625 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.344667 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xj52j" event={"ID":"09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219","Type":"ContainerDied","Data":"e74f520bea38eceb479c982b846dddaba4f9484640a710ea33bd81931a82fb0e"} Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.344774 4890 scope.go:117] "RemoveContainer" containerID="2accf25d3f4fb1076c0981f8104f2a65732cb74de7dc36d86ff38d999fc6f564" Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.346120 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xj52j" Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.361182 4890 scope.go:117] "RemoveContainer" containerID="73a03f3e105426264219a66e4d16997464f3612c1dee3ba97c5062a8ba018b13" Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.389589 4890 scope.go:117] "RemoveContainer" containerID="a8f192b70a8584941bd3e66ad75a1b5f0eccc832b20f8abdd724c18ed4c36c46" Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.424030 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xj52j"] Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.430131 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xj52j"] Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.925930 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" path="/var/lib/kubelet/pods/09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219/volumes" Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.927566 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" path="/var/lib/kubelet/pods/35199ad9-530d-4bc4-b3bb-6b89cc5c477e/volumes" Jan 21 15:36:09 crc kubenswrapper[4890]: I0121 15:36:09.929057 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" path="/var/lib/kubelet/pods/e5b533fe-8e5c-45c4-8168-fa7a5fb323f6/volumes" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.424336 4890 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.424698 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerName="extract-utilities" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.424720 4890 
state_mem.go:107] "Deleted CPUSet assignment" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerName="extract-utilities" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.424739 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerName="extract-content" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.424751 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerName="extract-content" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.424768 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerName="registry-server" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.424781 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerName="registry-server" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.424803 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerName="extract-utilities" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.424817 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerName="extract-utilities" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.424904 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerName="registry-server" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.424918 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerName="registry-server" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.424937 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerName="registry-server" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.424949 4890 
state_mem.go:107] "Deleted CPUSet assignment" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerName="registry-server" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.424962 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerName="extract-content" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.424973 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerName="extract-content" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.424995 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerName="extract-utilities" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.425011 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerName="extract-utilities" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.425048 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerName="extract-content" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.425072 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerName="extract-content" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.425248 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="35199ad9-530d-4bc4-b3bb-6b89cc5c477e" containerName="registry-server" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.425270 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="09bdd9b6-adf1-4e6b-bb1a-0d1cd124a219" containerName="registry-server" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.425285 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b533fe-8e5c-45c4-8168-fa7a5fb323f6" containerName="registry-server" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.425988 4890 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.426515 4890 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.426931 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9" gracePeriod=15 Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.427200 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56" gracePeriod=15 Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.427312 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f" gracePeriod=15 Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.427431 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474" gracePeriod=15 Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.427498 4890 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad" gracePeriod=15 Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.430009 4890 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.430469 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.430507 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.430535 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.430553 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.430574 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.430592 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.430619 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.430636 4890 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.430668 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.430684 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.430707 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.430725 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.430966 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.430992 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.431021 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.431043 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.432437 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 
15:36:10.432599 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.432610 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.432740 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.472151 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.487178 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.487241 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.487307 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.487338 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.487390 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.487577 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.487672 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.487716 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.513049 4890 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589172 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589535 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589563 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589593 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589611 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589660 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589682 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589704 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589781 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589326 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589834 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589859 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589883 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 
crc kubenswrapper[4890]: I0121 15:36:10.589904 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.589925 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.590042 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.674396 4890 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.674453 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 21 15:36:10 crc kubenswrapper[4890]: I0121 15:36:10.766815 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:10 crc kubenswrapper[4890]: W0121 15:36:10.797684 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-6d509db5ecbbb16ee37e116294db3c072bc5474a0bafcfeba0d3866bff891baa WatchSource:0}: Error finding container 6d509db5ecbbb16ee37e116294db3c072bc5474a0bafcfeba0d3866bff891baa: Status 404 returned error can't find the container with id 6d509db5ecbbb16ee37e116294db3c072bc5474a0bafcfeba0d3866bff891baa Jan 21 15:36:10 crc kubenswrapper[4890]: E0121 15:36:10.802004 4890 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.2:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc90390ccc993 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:36:10.801187219 +0000 UTC m=+253.162629658,LastTimestamp:2026-01-21 15:36:10.801187219 +0000 UTC m=+253.162629658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.365555 4890 generic.go:334] "Generic (PLEG): container finished" 
podID="4b04bfe8-0704-492a-a823-8defb73acbd7" containerID="91e3461e8cda138de9ba80e9d2f1e2a99b1e05400332ae12c4669aa4e069b94a" exitCode=0 Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.365701 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b04bfe8-0704-492a-a823-8defb73acbd7","Type":"ContainerDied","Data":"91e3461e8cda138de9ba80e9d2f1e2a99b1e05400332ae12c4669aa4e069b94a"} Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.367696 4890 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.368330 4890 status_manager.go:851] "Failed to get status for pod" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.368709 4890 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.371284 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.374511 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.375726 4890 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56" exitCode=0 Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.375835 4890 scope.go:117] "RemoveContainer" containerID="15541f29edbac0e022971ac0a554652509d7a28e9e7bfb02e35cb43efc4b62d5" Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.375949 4890 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f" exitCode=0 Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.376125 4890 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474" exitCode=0 Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.376154 4890 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad" exitCode=2 Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.378691 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669"} Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.380543 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6d509db5ecbbb16ee37e116294db3c072bc5474a0bafcfeba0d3866bff891baa"} Jan 21 15:36:11 
crc kubenswrapper[4890]: I0121 15:36:11.380475 4890 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.381531 4890 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:11 crc kubenswrapper[4890]: I0121 15:36:11.382584 4890 status_manager.go:851] "Failed to get status for pod" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:12 crc kubenswrapper[4890]: E0121 15:36:12.043855 4890 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.2:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc90390ccc993 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:36:10.801187219 +0000 UTC m=+253.162629658,LastTimestamp:2026-01-21 15:36:10.801187219 +0000 UTC m=+253.162629658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.390125 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.803006 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.803658 4890 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.804036 4890 status_manager.go:851] "Failed to get status for pod" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.809268 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:36:12 crc 
kubenswrapper[4890]: I0121 15:36:12.810050 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.810722 4890 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.811107 4890 status_manager.go:851] "Failed to get status for pod" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.811494 4890 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.821865 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-var-lock\") pod \"4b04bfe8-0704-492a-a823-8defb73acbd7\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.821930 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-kubelet-dir\") pod \"4b04bfe8-0704-492a-a823-8defb73acbd7\" 
(UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.821992 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b04bfe8-0704-492a-a823-8defb73acbd7-kube-api-access\") pod \"4b04bfe8-0704-492a-a823-8defb73acbd7\" (UID: \"4b04bfe8-0704-492a-a823-8defb73acbd7\") " Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.822001 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-var-lock" (OuterVolumeSpecName: "var-lock") pod "4b04bfe8-0704-492a-a823-8defb73acbd7" (UID: "4b04bfe8-0704-492a-a823-8defb73acbd7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.822039 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4b04bfe8-0704-492a-a823-8defb73acbd7" (UID: "4b04bfe8-0704-492a-a823-8defb73acbd7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.822230 4890 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.822242 4890 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b04bfe8-0704-492a-a823-8defb73acbd7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.827796 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b04bfe8-0704-492a-a823-8defb73acbd7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4b04bfe8-0704-492a-a823-8defb73acbd7" (UID: "4b04bfe8-0704-492a-a823-8defb73acbd7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.923444 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.923563 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.923599 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.923556 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.923595 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.923758 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.924035 4890 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.924058 4890 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.924075 4890 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:12 crc kubenswrapper[4890]: I0121 15:36:12.924093 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b04bfe8-0704-492a-a823-8defb73acbd7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.400446 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b04bfe8-0704-492a-a823-8defb73acbd7","Type":"ContainerDied","Data":"74672de2b266f28ec2f46972b5eb61d27256239a2844b43f3ea0ef36c204ed04"} Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.400826 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74672de2b266f28ec2f46972b5eb61d27256239a2844b43f3ea0ef36c204ed04" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.400482 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.406036 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.407448 4890 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9" exitCode=0 Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.407522 4890 scope.go:117] "RemoveContainer" containerID="1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.407577 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.430199 4890 status_manager.go:851] "Failed to get status for pod" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.430758 4890 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.431601 4890 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.434667 4890 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.435130 4890 status_manager.go:851] "Failed to get status for pod" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.435578 4890 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.438106 4890 scope.go:117] "RemoveContainer" containerID="be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.461712 4890 scope.go:117] "RemoveContainer" containerID="3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.490793 4890 scope.go:117] "RemoveContainer" containerID="ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.510413 4890 scope.go:117] "RemoveContainer" 
containerID="0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.524591 4890 scope.go:117] "RemoveContainer" containerID="7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.548981 4890 scope.go:117] "RemoveContainer" containerID="1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56" Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.549471 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\": container with ID starting with 1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56 not found: ID does not exist" containerID="1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.549535 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56"} err="failed to get container status \"1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\": rpc error: code = NotFound desc = could not find container \"1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56\": container with ID starting with 1f35cc21bbfb1e1123e14cd213cdbb0edf5a4e4180479e732261161fa3984c56 not found: ID does not exist" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.549574 4890 scope.go:117] "RemoveContainer" containerID="be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f" Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.549902 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\": container with ID starting with 
be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f not found: ID does not exist" containerID="be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.549946 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f"} err="failed to get container status \"be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\": rpc error: code = NotFound desc = could not find container \"be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f\": container with ID starting with be08bccfa48f4c51a6c34d020e8b58cba16be7cd10768421b3d77d5e4cd2ec3f not found: ID does not exist" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.549971 4890 scope.go:117] "RemoveContainer" containerID="3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474" Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.550286 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\": container with ID starting with 3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474 not found: ID does not exist" containerID="3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.550320 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474"} err="failed to get container status \"3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\": rpc error: code = NotFound desc = could not find container \"3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474\": container with ID starting with 3f99e9a150331a7e026c1ccbee6ee906128a202d486b9599908fb3e8a0676474 not found: ID does not 
exist" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.550346 4890 scope.go:117] "RemoveContainer" containerID="ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad" Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.550930 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\": container with ID starting with ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad not found: ID does not exist" containerID="ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.550953 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad"} err="failed to get container status \"ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\": rpc error: code = NotFound desc = could not find container \"ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad\": container with ID starting with ab9bb1b8ae5be6d269420d6160e4d69684a8a53e809e93778df1d47b3bfbcfad not found: ID does not exist" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.550966 4890 scope.go:117] "RemoveContainer" containerID="0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9" Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.551248 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\": container with ID starting with 0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9 not found: ID does not exist" containerID="0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9" Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.551287 4890 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9"} err="failed to get container status \"0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\": rpc error: code = NotFound desc = could not find container \"0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9\": container with ID starting with 0544fcd5fa2640424085968a1948e6455b3237c76f1b513e7503c1701ab4c6f9 not found: ID does not exist"
Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.551310 4890 scope.go:117] "RemoveContainer" containerID="7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede"
Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.551808 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\": container with ID starting with 7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede not found: ID does not exist" containerID="7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede"
Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.551857 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede"} err="failed to get container status \"7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\": rpc error: code = NotFound desc = could not find container \"7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede\": container with ID starting with 7575a2f654861b487ff0b488e80cc83a3d0930835fc390cc7fd2a1ef1a324ede not found: ID does not exist"
Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.846889 4890 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.847979 4890 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.848962 4890 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.849994 4890 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.850487 4890 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.850605 4890 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 21 15:36:13 crc kubenswrapper[4890]: E0121 15:36:13.851032 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused" interval="200ms"
Jan 21 15:36:13 crc kubenswrapper[4890]: I0121 15:36:13.923687 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 21 15:36:14 crc kubenswrapper[4890]: E0121 15:36:14.052069 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused" interval="400ms"
Jan 21 15:36:14 crc kubenswrapper[4890]: E0121 15:36:14.452945 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused" interval="800ms"
Jan 21 15:36:15 crc kubenswrapper[4890]: E0121 15:36:15.253561 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused" interval="1.6s"
Jan 21 15:36:16 crc kubenswrapper[4890]: E0121 15:36:16.855116 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused" interval="3.2s"
Jan 21 15:36:17 crc kubenswrapper[4890]: I0121 15:36:17.919495 4890 status_manager.go:851] "Failed to get status for pod" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:17 crc kubenswrapper[4890]: I0121 15:36:17.919838 4890 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:20 crc kubenswrapper[4890]: E0121 15:36:20.056643 4890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.2:6443: connect: connection refused" interval="6.4s"
Jan 21 15:36:20 crc kubenswrapper[4890]: I0121 15:36:20.913421 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:20 crc kubenswrapper[4890]: I0121 15:36:20.914853 4890 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:20 crc kubenswrapper[4890]: I0121 15:36:20.915932 4890 status_manager.go:851] "Failed to get status for pod" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:20 crc kubenswrapper[4890]: I0121 15:36:20.934191 4890 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:20 crc kubenswrapper[4890]: I0121 15:36:20.934240 4890 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:20 crc kubenswrapper[4890]: E0121 15:36:20.935324 4890 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:20 crc kubenswrapper[4890]: I0121 15:36:20.937025 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:21 crc kubenswrapper[4890]: I0121 15:36:21.459782 4890 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6ec989f7661af509fc7935d76dd8722eda80d0d8e28b3d5397dcbf8f291bf7c7" exitCode=0
Jan 21 15:36:21 crc kubenswrapper[4890]: I0121 15:36:21.459859 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6ec989f7661af509fc7935d76dd8722eda80d0d8e28b3d5397dcbf8f291bf7c7"}
Jan 21 15:36:21 crc kubenswrapper[4890]: I0121 15:36:21.460036 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b2ca94753abd3bba2de584bdcf90598c9bc1304f3878537e885f890c227b6f7c"}
Jan 21 15:36:21 crc kubenswrapper[4890]: I0121 15:36:21.460310 4890 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:21 crc kubenswrapper[4890]: I0121 15:36:21.460325 4890 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:21 crc kubenswrapper[4890]: E0121 15:36:21.460731 4890 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.2:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:21 crc kubenswrapper[4890]: I0121 15:36:21.460741 4890 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:21 crc kubenswrapper[4890]: I0121 15:36:21.461095 4890 status_manager.go:851] "Failed to get status for pod" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.2:6443: connect: connection refused"
Jan 21 15:36:22 crc kubenswrapper[4890]: I0121 15:36:22.478201 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2705b260f07e43eaae8a75f2237bf54a268f663e3aa6df1f147b9be5d7edbf1e"}
Jan 21 15:36:22 crc kubenswrapper[4890]: I0121 15:36:22.478572 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"89131c033a0e38e98a5ebefdf4def9c39fa860cbdbbbeca2dd45f0703b57cf4a"}
Jan 21 15:36:22 crc kubenswrapper[4890]: I0121 15:36:22.478592 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d98698d3b1d4a5baf83c1591785afd95233400ba4377de717c7a3649bbf0022a"}
Jan 21 15:36:22 crc kubenswrapper[4890]: I0121 15:36:22.478619 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1ea0cce3af32e0eb46ecf0cadc3dc4ec85b7414fb5e037483e99799726414ab1"}
Jan 21 15:36:23 crc kubenswrapper[4890]: I0121 15:36:23.490580 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"39d38dac3af0d68025594a88dde9f69b7aa6273ecaf29621d331e3cae4017d63"}
Jan 21 15:36:23 crc kubenswrapper[4890]: I0121 15:36:23.490996 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:23 crc kubenswrapper[4890]: I0121 15:36:23.490868 4890 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:23 crc kubenswrapper[4890]: I0121 15:36:23.491028 4890 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:25 crc kubenswrapper[4890]: I0121 15:36:25.937416 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:25 crc kubenswrapper[4890]: I0121 15:36:25.937474 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:25 crc kubenswrapper[4890]: I0121 15:36:25.942803 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:28 crc kubenswrapper[4890]: I0121 15:36:28.501038 4890 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:28 crc kubenswrapper[4890]: I0121 15:36:28.525254 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 21 15:36:28 crc kubenswrapper[4890]: I0121 15:36:28.525324 4890 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd" exitCode=1
Jan 21 15:36:28 crc kubenswrapper[4890]: I0121 15:36:28.525381 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd"}
Jan 21 15:36:28 crc kubenswrapper[4890]: I0121 15:36:28.525873 4890 scope.go:117] "RemoveContainer" containerID="5cf82a96927b967d79869abf54047748ffba09f5e202c5468e8124d997b34bbd"
Jan 21 15:36:28 crc kubenswrapper[4890]: I0121 15:36:28.617226 4890 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1bf8559d-1e98-47a8-adb0-fdccfc219156"
Jan 21 15:36:29 crc kubenswrapper[4890]: I0121 15:36:29.533072 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 21 15:36:29 crc kubenswrapper[4890]: I0121 15:36:29.533335 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"72a1dfec0e2237ee3d658ac49782a66c7de1ed0c2c8724c98adf2dce93ad45b1"}
Jan 21 15:36:29 crc kubenswrapper[4890]: I0121 15:36:29.533547 4890 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:29 crc kubenswrapper[4890]: I0121 15:36:29.533563 4890 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:29 crc kubenswrapper[4890]: I0121 15:36:29.538003 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:36:29 crc kubenswrapper[4890]: I0121 15:36:29.538579 4890 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1bf8559d-1e98-47a8-adb0-fdccfc219156"
Jan 21 15:36:30 crc kubenswrapper[4890]: I0121 15:36:30.541424 4890 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:30 crc kubenswrapper[4890]: I0121 15:36:30.541485 4890 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b64a6fe1-2ef4-4fbb-9cd1-e6a232644494"
Jan 21 15:36:30 crc kubenswrapper[4890]: I0121 15:36:30.546559 4890 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1bf8559d-1e98-47a8-adb0-fdccfc219156"
Jan 21 15:36:33 crc kubenswrapper[4890]: I0121 15:36:33.212290 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:36:33 crc kubenswrapper[4890]: I0121 15:36:33.218938 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:36:33 crc kubenswrapper[4890]: I0121 15:36:33.556694 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:36:34 crc kubenswrapper[4890]: I0121 15:36:34.693674 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 15:36:34 crc kubenswrapper[4890]: I0121 15:36:34.694314 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 21 15:36:34 crc kubenswrapper[4890]: I0121 15:36:34.844092 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 21 15:36:34 crc kubenswrapper[4890]: I0121 15:36:34.863998 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 15:36:34 crc kubenswrapper[4890]: I0121 15:36:34.871637 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 15:36:34 crc kubenswrapper[4890]: I0121 15:36:34.891793 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 15:36:34 crc kubenswrapper[4890]: I0121 15:36:34.935456 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 21 15:36:34 crc kubenswrapper[4890]: I0121 15:36:34.939597 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.058128 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.092122 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.119253 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.233348 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.410728 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.442227 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.637665 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.645112 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.677969 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.822250 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 21 15:36:35 crc kubenswrapper[4890]: I0121 15:36:35.859299 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.172020 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.200247 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.331867 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.343987 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.409724 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.428179 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.540186 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.542671 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.671994 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.693946 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.931184 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 15:36:36 crc kubenswrapper[4890]: I0121 15:36:36.987338 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 21 15:36:37 crc kubenswrapper[4890]: I0121 15:36:37.032952 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 15:36:37 crc kubenswrapper[4890]: I0121 15:36:37.165583 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 15:36:37 crc kubenswrapper[4890]: I0121 15:36:37.371126 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 15:36:37 crc kubenswrapper[4890]: I0121 15:36:37.375710 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 15:36:37 crc kubenswrapper[4890]: I0121 15:36:37.452903 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 21 15:36:37 crc kubenswrapper[4890]: I0121 15:36:37.628619 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.032478 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.077013 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.112082 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.138444 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.202177 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.230634 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.282878 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.390685 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.709471 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 21 15:36:38 crc kubenswrapper[4890]: I0121 15:36:38.861990 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.078800 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.302546 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.326203 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.355031 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.553107 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.590281 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.615104 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.621476 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.645962 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.659340 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.719567 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.726557 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.886400 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.927904 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 15:36:39 crc kubenswrapper[4890]: I0121 15:36:39.984056 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.411406 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.628478 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.706511 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.727832 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.730900 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.774451 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.835663 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.856103 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.923444 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 15:36:40 crc kubenswrapper[4890]: I0121 15:36:40.964649 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 21 15:36:41 crc kubenswrapper[4890]: I0121 15:36:41.048929 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 15:36:41 crc kubenswrapper[4890]: I0121 15:36:41.163678 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 15:36:41 crc kubenswrapper[4890]: I0121 15:36:41.173187 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 15:36:41 crc kubenswrapper[4890]: I0121 15:36:41.214537 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 15:36:41 crc kubenswrapper[4890]: I0121 15:36:41.805769 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 15:36:41 crc kubenswrapper[4890]: I0121 15:36:41.905498 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 21 15:36:42 crc kubenswrapper[4890]: I0121 15:36:42.129537 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 15:36:42 crc kubenswrapper[4890]: I0121 15:36:42.145136 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 21 15:36:42 crc kubenswrapper[4890]: I0121 15:36:42.248745 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 21 15:36:42 crc kubenswrapper[4890]: I0121 15:36:42.251689 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 21 15:36:42 crc kubenswrapper[4890]: I0121 15:36:42.259137 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 15:36:42 crc kubenswrapper[4890]: I0121 15:36:42.402108 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 21 15:36:42 crc kubenswrapper[4890]: I0121 15:36:42.414042 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 15:36:42 crc kubenswrapper[4890]: I0121 15:36:42.876100 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 15:36:42 crc kubenswrapper[4890]: I0121 15:36:42.993264 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.025283 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.111953 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.441183 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.628150 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.635777 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.636103 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.659298 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.876448 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.940819 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 21 15:36:43 crc kubenswrapper[4890]: I0121 15:36:43.945480 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.100789 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.271131 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.664127 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.748718 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.768888 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.789654 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.797456 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.903097 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.951466 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.964049 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 21 15:36:44 crc kubenswrapper[4890]: I0121 15:36:44.986086 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.049986 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.165625 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.215224 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.235745 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.398938 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.417706 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.419454 4890 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.512368 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.536806 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.801415 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.819367 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.823302 4890 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.823463 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.881994 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.940506 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.978575 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.982091 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 15:36:45 crc kubenswrapper[4890]: I0121 15:36:45.998381 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.040752 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 15:36:46 crc kubenswrapper[4890]: I0121
15:36:46.167469 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.227003 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.228772 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.398179 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.453065 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.454097 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.474094 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.474537 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.486418 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.537728 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.545702 4890 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.602660 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.643073 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.704917 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.735668 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.936245 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:36:46 crc kubenswrapper[4890]: I0121 15:36:46.988318 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 15:36:47 crc kubenswrapper[4890]: I0121 15:36:47.200957 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 15:36:47 crc kubenswrapper[4890]: I0121 15:36:47.251392 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 15:36:47 crc kubenswrapper[4890]: I0121 15:36:47.305210 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 15:36:47 crc kubenswrapper[4890]: I0121 15:36:47.435271 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 15:36:47 crc 
kubenswrapper[4890]: I0121 15:36:47.531022 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 15:36:47 crc kubenswrapper[4890]: I0121 15:36:47.585658 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:36:47 crc kubenswrapper[4890]: I0121 15:36:47.687446 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:36:47 crc kubenswrapper[4890]: I0121 15:36:47.728496 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 15:36:47 crc kubenswrapper[4890]: I0121 15:36:47.903369 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.042432 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.219902 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.221586 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.246092 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.285323 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.456652 4890 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.462805 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.491720 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.554635 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.573607 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.700215 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.767646 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.816271 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.895111 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.950919 4890 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.956098 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podStartSLOduration=38.956073205 podStartE2EDuration="38.956073205s" podCreationTimestamp="2026-01-21 15:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:36:28.520898711 +0000 UTC m=+270.882341120" watchObservedRunningTime="2026-01-21 15:36:48.956073205 +0000 UTC m=+291.317515634" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.956664 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.956716 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.960779 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:36:48 crc kubenswrapper[4890]: I0121 15:36:48.982604 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.982585179 podStartE2EDuration="20.982585179s" podCreationTimestamp="2026-01-21 15:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:36:48.981367798 +0000 UTC m=+291.342810217" watchObservedRunningTime="2026-01-21 15:36:48.982585179 +0000 UTC m=+291.344027608" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.015206 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.022092 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.286025 4890 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.311095 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.392018 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.452815 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.488599 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.696865 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.710578 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.807587 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.866935 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.868927 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 15:36:49 crc kubenswrapper[4890]: I0121 15:36:49.944673 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.035010 4890 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.041066 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.049480 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.152036 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.315195 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.367926 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.395585 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.416197 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.481818 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.541942 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.611459 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 
15:36:50.679687 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 15:36:50 crc kubenswrapper[4890]: I0121 15:36:50.841479 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.023746 4890 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.024058 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669" gracePeriod=5 Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.070748 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.071379 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.101174 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.172781 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.176566 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.323572 4890 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.352536 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.359395 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.394708 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.402944 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.405850 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.545774 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 15:36:51 crc kubenswrapper[4890]: I0121 15:36:51.710682 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.006720 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.198311 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.360970 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.478132 
4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.621170 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.703607 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.773006 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.776864 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.838160 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.906840 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:36:52 crc kubenswrapper[4890]: I0121 15:36:52.920539 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.321257 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.352856 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.423709 4890 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"metrics-tls" Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.437942 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.580970 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.705950 4890 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.756979 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.760544 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.863188 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 15:36:53 crc kubenswrapper[4890]: I0121 15:36:53.990272 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.045165 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.097515 4890 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.243969 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.415485 4890 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.473581 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.580340 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.630654 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.783086 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.976067 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 15:36:54 crc kubenswrapper[4890]: I0121 15:36:54.979698 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:36:55 crc kubenswrapper[4890]: I0121 15:36:55.030661 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 15:36:55 crc kubenswrapper[4890]: I0121 15:36:55.046870 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 15:36:55 crc kubenswrapper[4890]: I0121 15:36:55.113487 4890 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 15:36:55 crc kubenswrapper[4890]: I0121 15:36:55.228996 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:36:55 crc kubenswrapper[4890]: I0121 15:36:55.521506 4890 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.612899 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.613421 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.671237 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.671440 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.671461 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.671564 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.671563 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.671635 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.671703 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.671737 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.671858 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.672230 4890 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.672251 4890 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.672265 4890 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.672277 4890 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.680838 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.698187 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.698267 4890 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669" exitCode=137
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.698363 4890 scope.go:117] "RemoveContainer" containerID="5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669"
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.698384 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.728135 4890 scope.go:117] "RemoveContainer" containerID="5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669"
Jan 21 15:36:56 crc kubenswrapper[4890]: E0121 15:36:56.728875 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669\": container with ID starting with 5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669 not found: ID does not exist" containerID="5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669"
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.728913 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669"} err="failed to get container status \"5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669\": rpc error: code = NotFound desc = could not find container \"5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669\": container with ID starting with 5da1d81616c5db2c79d2063cd8196cc6d1bc7b57523fd008b84311f2666df669 not found: ID does not exist"
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.731322 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.749275 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.773446 4890 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 15:36:56 crc kubenswrapper[4890]: I0121 15:36:56.970293 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.054864 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.160284 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.546924 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.585708 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.602152 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.775942 4890 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.894872 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.923611 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.924314 4890 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.946397 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.946470 4890 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="206ac689-6e7d-4d27-9c52-8b7b9e31ffce"
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.957544 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 15:36:57 crc kubenswrapper[4890]: I0121 15:36:57.957594 4890 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="206ac689-6e7d-4d27-9c52-8b7b9e31ffce"
Jan 21 15:36:58 crc kubenswrapper[4890]: I0121 15:36:58.798441 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.314333 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfnt6"]
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.315536 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" podUID="2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" containerName="controller-manager" containerID="cri-o://490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925" gracePeriod=30
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.421426 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt"]
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.422031 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" podUID="f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" containerName="route-controller-manager" containerID="cri-o://e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2" gracePeriod=30
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.671209 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.763141 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.799531 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-client-ca\") pod \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") "
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.799706 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcbcf\" (UniqueName: \"kubernetes.io/projected/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-kube-api-access-lcbcf\") pod \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") "
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.799786 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-serving-cert\") pod \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") "
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.799855 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-config\") pod \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") "
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.799938 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-proxy-ca-bundles\") pod \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\" (UID: \"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4\") "
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.801096 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" (UID: "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.801111 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" (UID: "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.801262 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-config" (OuterVolumeSpecName: "config") pod "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" (UID: "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.808850 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-kube-api-access-lcbcf" (OuterVolumeSpecName: "kube-api-access-lcbcf") pod "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" (UID: "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4"). InnerVolumeSpecName "kube-api-access-lcbcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.808926 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" (UID: "2b1b7e60-b325-4424-900c-1d1d5b0cd7e4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.884770 4890 generic.go:334] "Generic (PLEG): container finished" podID="f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" containerID="e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2" exitCode=0
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.884802 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.884819 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" event={"ID":"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3","Type":"ContainerDied","Data":"e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2"}
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.885631 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt" event={"ID":"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3","Type":"ContainerDied","Data":"dee814e355b5a28e436968cc8c260c6cf7f38d3a928a5fced93eff27a229e0d9"}
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.885652 4890 scope.go:117] "RemoveContainer" containerID="e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.888553 4890 generic.go:334] "Generic (PLEG): container finished" podID="2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" containerID="490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925" exitCode=0
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.888583 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" event={"ID":"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4","Type":"ContainerDied","Data":"490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925"}
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.888638 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6" event={"ID":"2b1b7e60-b325-4424-900c-1d1d5b0cd7e4","Type":"ContainerDied","Data":"15e271db14de1ccd446b2d95a7751da5ef3a9d638e2ff993850c7dc19c4997c2"}
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.888719 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bfnt6"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.905038 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-serving-cert\") pod \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") "
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.905112 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-config\") pod \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") "
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.905140 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-client-ca\") pod \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") "
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.905169 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbbbh\" (UniqueName: \"kubernetes.io/projected/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-kube-api-access-bbbbh\") pod \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\" (UID: \"f98bb88a-cdde-4b2f-90f6-c91ddd6287f3\") "
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.905328 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.905362 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.905375 4890 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.905386 4890 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.905397 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcbcf\" (UniqueName: \"kubernetes.io/projected/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4-kube-api-access-lcbcf\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.906305 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-config" (OuterVolumeSpecName: "config") pod "f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" (UID: "f98bb88a-cdde-4b2f-90f6-c91ddd6287f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.907401 4890 scope.go:117] "RemoveContainer" containerID="e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.907418 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-client-ca" (OuterVolumeSpecName: "client-ca") pod "f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" (UID: "f98bb88a-cdde-4b2f-90f6-c91ddd6287f3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:37:28 crc kubenswrapper[4890]: E0121 15:37:28.908285 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2\": container with ID starting with e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2 not found: ID does not exist" containerID="e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.908321 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2"} err="failed to get container status \"e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2\": rpc error: code = NotFound desc = could not find container \"e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2\": container with ID starting with e87f4212191ff41d4dd656f906080a04907d40222e9353b66a9ea5beca7c89b2 not found: ID does not exist"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.908363 4890 scope.go:117] "RemoveContainer" containerID="490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.908749 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-kube-api-access-bbbbh" (OuterVolumeSpecName: "kube-api-access-bbbbh") pod "f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" (UID: "f98bb88a-cdde-4b2f-90f6-c91ddd6287f3"). InnerVolumeSpecName "kube-api-access-bbbbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.928279 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfnt6"]
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.931441 4890 scope.go:117] "RemoveContainer" containerID="490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925"
Jan 21 15:37:28 crc kubenswrapper[4890]: E0121 15:37:28.932526 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925\": container with ID starting with 490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925 not found: ID does not exist" containerID="490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.932570 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925"} err="failed to get container status \"490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925\": rpc error: code = NotFound desc = could not find container \"490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925\": container with ID starting with 490885a49662ac4bb9610c551781dd7a23be18cdb3061561f6b27af273f19925 not found: ID does not exist"
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.936401 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfnt6"]
Jan 21 15:37:28 crc kubenswrapper[4890]: I0121 15:37:28.951090 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" (UID: "f98bb88a-cdde-4b2f-90f6-c91ddd6287f3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.005936 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.005973 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.005981 4890 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.005989 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbbbh\" (UniqueName: \"kubernetes.io/projected/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3-kube-api-access-bbbbh\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.220015 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt"]
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.224013 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9dkt"]
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.778923 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"]
Jan 21 15:37:29 crc kubenswrapper[4890]: E0121 15:37:29.779325 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" containerName="route-controller-manager"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.779346 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" containerName="route-controller-manager"
Jan 21 15:37:29 crc kubenswrapper[4890]: E0121 15:37:29.779412 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" containerName="installer"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.779424 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" containerName="installer"
Jan 21 15:37:29 crc kubenswrapper[4890]: E0121 15:37:29.779439 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" containerName="controller-manager"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.779454 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" containerName="controller-manager"
Jan 21 15:37:29 crc kubenswrapper[4890]: E0121 15:37:29.779472 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.779484 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.779671 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" containerName="route-controller-manager"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.779686 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.779712 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" containerName="controller-manager"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.779729 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b04bfe8-0704-492a-a823-8defb73acbd7" containerName="installer"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.780310 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.783405 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.783572 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.783610 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.786618 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.786655 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.790506 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86787f5dd8-mt92r"]
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.794201 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.798101 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.802303 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.802979 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.804712 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.806040 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.813964 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.814476 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.835281 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"]
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.841939 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.846763 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86787f5dd8-mt92r"]
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.922241 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b1b7e60-b325-4424-900c-1d1d5b0cd7e4" path="/var/lib/kubelet/pods/2b1b7e60-b325-4424-900c-1d1d5b0cd7e4/volumes"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.923072 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98bb88a-cdde-4b2f-90f6-c91ddd6287f3" path="/var/lib/kubelet/pods/f98bb88a-cdde-4b2f-90f6-c91ddd6287f3/volumes"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.933734 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-config\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.933784 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-config\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.933843 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61072ec3-4b37-46db-8092-0f68536323cf-serving-cert\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.933877 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sntgs\" (UniqueName: \"kubernetes.io/projected/61072ec3-4b37-46db-8092-0f68536323cf-kube-api-access-sntgs\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.933935 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-client-ca\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.933993 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-proxy-ca-bundles\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.934042 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-client-ca\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.934093 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2270ab-c573-4a00-8c0e-42e79598fcd6-serving-cert\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:29 crc kubenswrapper[4890]: I0121 15:37:29.934125 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2bw\" (UniqueName: \"kubernetes.io/projected/4d2270ab-c573-4a00-8c0e-42e79598fcd6-kube-api-access-2c2bw\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.035287 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61072ec3-4b37-46db-8092-0f68536323cf-serving-cert\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"
Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.035435 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sntgs\" (UniqueName: \"kubernetes.io/projected/61072ec3-4b37-46db-8092-0f68536323cf-kube-api-access-sntgs\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"
Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.035546 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-proxy-ca-bundles\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.035591 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-client-ca\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"
Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.035646 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-client-ca\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.036798 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-proxy-ca-bundles\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.036828 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-client-ca\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.037959 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2270ab-c573-4a00-8c0e-42e79598fcd6-serving-cert\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r"
Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.038045 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c2bw\" (UniqueName:
\"kubernetes.io/projected/4d2270ab-c573-4a00-8c0e-42e79598fcd6-kube-api-access-2c2bw\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.038199 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-config\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.038274 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-config\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.039025 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-client-ca\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.039488 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-config\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.039647 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61072ec3-4b37-46db-8092-0f68536323cf-serving-cert\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.040571 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2270ab-c573-4a00-8c0e-42e79598fcd6-serving-cert\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.044647 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-config\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.053541 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sntgs\" (UniqueName: \"kubernetes.io/projected/61072ec3-4b37-46db-8092-0f68536323cf-kube-api-access-sntgs\") pod \"route-controller-manager-5d74b7c87d-k9wvm\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.070716 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c2bw\" (UniqueName: \"kubernetes.io/projected/4d2270ab-c573-4a00-8c0e-42e79598fcd6-kube-api-access-2c2bw\") pod \"controller-manager-86787f5dd8-mt92r\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " 
pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.120968 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.150601 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.348300 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"] Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.377270 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86787f5dd8-mt92r"] Jan 21 15:37:30 crc kubenswrapper[4890]: W0121 15:37:30.383107 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d2270ab_c573_4a00_8c0e_42e79598fcd6.slice/crio-fdcfd324956ec8540b3d303562ca0e7214fbcd628ad66e25b7092076066eb43f WatchSource:0}: Error finding container fdcfd324956ec8540b3d303562ca0e7214fbcd628ad66e25b7092076066eb43f: Status 404 returned error can't find the container with id fdcfd324956ec8540b3d303562ca0e7214fbcd628ad66e25b7092076066eb43f Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.904669 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" event={"ID":"61072ec3-4b37-46db-8092-0f68536323cf","Type":"ContainerStarted","Data":"5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408"} Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.906804 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" 
event={"ID":"61072ec3-4b37-46db-8092-0f68536323cf","Type":"ContainerStarted","Data":"ed28d0f4daef524b6a5f8cbf41988d64ab6fca7e68f4496ff1ca9fd424fccaf4"} Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.906941 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.907056 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" event={"ID":"4d2270ab-c573-4a00-8c0e-42e79598fcd6","Type":"ContainerStarted","Data":"b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121"} Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.907151 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" event={"ID":"4d2270ab-c573-4a00-8c0e-42e79598fcd6","Type":"ContainerStarted","Data":"fdcfd324956ec8540b3d303562ca0e7214fbcd628ad66e25b7092076066eb43f"} Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.907265 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.911089 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.931085 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" podStartSLOduration=2.931061272 podStartE2EDuration="2.931061272s" podCreationTimestamp="2026-01-21 15:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:37:30.928221887 +0000 UTC m=+333.289664296" 
watchObservedRunningTime="2026-01-21 15:37:30.931061272 +0000 UTC m=+333.292503701" Jan 21 15:37:30 crc kubenswrapper[4890]: I0121 15:37:30.953529 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" podStartSLOduration=2.953503991 podStartE2EDuration="2.953503991s" podCreationTimestamp="2026-01-21 15:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:37:30.949580898 +0000 UTC m=+333.311023307" watchObservedRunningTime="2026-01-21 15:37:30.953503991 +0000 UTC m=+333.314946400" Jan 21 15:37:31 crc kubenswrapper[4890]: I0121 15:37:31.118089 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:37:48 crc kubenswrapper[4890]: I0121 15:37:48.762447 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:37:48 crc kubenswrapper[4890]: I0121 15:37:48.762813 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.276295 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86787f5dd8-mt92r"] Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.277821 4890 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" podUID="4d2270ab-c573-4a00-8c0e-42e79598fcd6" containerName="controller-manager" containerID="cri-o://b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121" gracePeriod=30 Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.292132 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"] Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.293126 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" podUID="61072ec3-4b37-46db-8092-0f68536323cf" containerName="route-controller-manager" containerID="cri-o://5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408" gracePeriod=30 Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.736999 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.747129 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.780568 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c2bw\" (UniqueName: \"kubernetes.io/projected/4d2270ab-c573-4a00-8c0e-42e79598fcd6-kube-api-access-2c2bw\") pod \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.780653 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61072ec3-4b37-46db-8092-0f68536323cf-serving-cert\") pod \"61072ec3-4b37-46db-8092-0f68536323cf\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.780680 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sntgs\" (UniqueName: \"kubernetes.io/projected/61072ec3-4b37-46db-8092-0f68536323cf-kube-api-access-sntgs\") pod \"61072ec3-4b37-46db-8092-0f68536323cf\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.780709 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-config\") pod \"61072ec3-4b37-46db-8092-0f68536323cf\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.780739 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-client-ca\") pod \"61072ec3-4b37-46db-8092-0f68536323cf\" (UID: \"61072ec3-4b37-46db-8092-0f68536323cf\") " Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.780753 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-client-ca\") pod \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.780791 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-proxy-ca-bundles\") pod \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.780809 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2270ab-c573-4a00-8c0e-42e79598fcd6-serving-cert\") pod \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.780834 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-config\") pod \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\" (UID: \"4d2270ab-c573-4a00-8c0e-42e79598fcd6\") " Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.782056 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-config" (OuterVolumeSpecName: "config") pod "4d2270ab-c573-4a00-8c0e-42e79598fcd6" (UID: "4d2270ab-c573-4a00-8c0e-42e79598fcd6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.782679 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-client-ca" (OuterVolumeSpecName: "client-ca") pod "4d2270ab-c573-4a00-8c0e-42e79598fcd6" (UID: "4d2270ab-c573-4a00-8c0e-42e79598fcd6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.782714 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-client-ca" (OuterVolumeSpecName: "client-ca") pod "61072ec3-4b37-46db-8092-0f68536323cf" (UID: "61072ec3-4b37-46db-8092-0f68536323cf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.783162 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4d2270ab-c573-4a00-8c0e-42e79598fcd6" (UID: "4d2270ab-c573-4a00-8c0e-42e79598fcd6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.785985 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-config" (OuterVolumeSpecName: "config") pod "61072ec3-4b37-46db-8092-0f68536323cf" (UID: "61072ec3-4b37-46db-8092-0f68536323cf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.788416 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2270ab-c573-4a00-8c0e-42e79598fcd6-kube-api-access-2c2bw" (OuterVolumeSpecName: "kube-api-access-2c2bw") pod "4d2270ab-c573-4a00-8c0e-42e79598fcd6" (UID: "4d2270ab-c573-4a00-8c0e-42e79598fcd6"). InnerVolumeSpecName "kube-api-access-2c2bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.789147 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2270ab-c573-4a00-8c0e-42e79598fcd6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4d2270ab-c573-4a00-8c0e-42e79598fcd6" (UID: "4d2270ab-c573-4a00-8c0e-42e79598fcd6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.790527 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61072ec3-4b37-46db-8092-0f68536323cf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61072ec3-4b37-46db-8092-0f68536323cf" (UID: "61072ec3-4b37-46db-8092-0f68536323cf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.792544 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61072ec3-4b37-46db-8092-0f68536323cf-kube-api-access-sntgs" (OuterVolumeSpecName: "kube-api-access-sntgs") pod "61072ec3-4b37-46db-8092-0f68536323cf" (UID: "61072ec3-4b37-46db-8092-0f68536323cf"). InnerVolumeSpecName "kube-api-access-sntgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.882679 4890 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.882765 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2270ab-c573-4a00-8c0e-42e79598fcd6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.882780 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.882804 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c2bw\" (UniqueName: \"kubernetes.io/projected/4d2270ab-c573-4a00-8c0e-42e79598fcd6-kube-api-access-2c2bw\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.882823 4890 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61072ec3-4b37-46db-8092-0f68536323cf-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.882835 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sntgs\" (UniqueName: \"kubernetes.io/projected/61072ec3-4b37-46db-8092-0f68536323cf-kube-api-access-sntgs\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.882848 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.882859 4890 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61072ec3-4b37-46db-8092-0f68536323cf-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:08 crc kubenswrapper[4890]: I0121 15:38:08.882870 4890 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2270ab-c573-4a00-8c0e-42e79598fcd6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.143950 4890 generic.go:334] "Generic (PLEG): container finished" podID="61072ec3-4b37-46db-8092-0f68536323cf" containerID="5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408" exitCode=0 Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.144020 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" event={"ID":"61072ec3-4b37-46db-8092-0f68536323cf","Type":"ContainerDied","Data":"5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408"} Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.144596 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" event={"ID":"61072ec3-4b37-46db-8092-0f68536323cf","Type":"ContainerDied","Data":"ed28d0f4daef524b6a5f8cbf41988d64ab6fca7e68f4496ff1ca9fd424fccaf4"} Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.144064 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.144639 4890 scope.go:117] "RemoveContainer" containerID="5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.147683 4890 generic.go:334] "Generic (PLEG): container finished" podID="4d2270ab-c573-4a00-8c0e-42e79598fcd6" containerID="b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121" exitCode=0 Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.147738 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" event={"ID":"4d2270ab-c573-4a00-8c0e-42e79598fcd6","Type":"ContainerDied","Data":"b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121"} Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.147760 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.147775 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86787f5dd8-mt92r" event={"ID":"4d2270ab-c573-4a00-8c0e-42e79598fcd6","Type":"ContainerDied","Data":"fdcfd324956ec8540b3d303562ca0e7214fbcd628ad66e25b7092076066eb43f"} Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.169278 4890 scope.go:117] "RemoveContainer" containerID="5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408" Jan 21 15:38:09 crc kubenswrapper[4890]: E0121 15:38:09.170150 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408\": container with ID starting with 5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408 not found: ID does not exist" 
containerID="5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.170205 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408"} err="failed to get container status \"5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408\": rpc error: code = NotFound desc = could not find container \"5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408\": container with ID starting with 5c92dedf7027e7674e410df616a4aaf08419c1421eb92e0b58b23ccffd8ed408 not found: ID does not exist" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.170239 4890 scope.go:117] "RemoveContainer" containerID="b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.187117 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86787f5dd8-mt92r"] Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.191830 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86787f5dd8-mt92r"] Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.200937 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"] Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.205954 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d74b7c87d-k9wvm"] Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.208768 4890 scope.go:117] "RemoveContainer" containerID="b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121" Jan 21 15:38:09 crc kubenswrapper[4890]: E0121 15:38:09.209336 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121\": container with ID starting with b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121 not found: ID does not exist" containerID="b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.209409 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121"} err="failed to get container status \"b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121\": rpc error: code = NotFound desc = could not find container \"b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121\": container with ID starting with b79c56f30bdea3b5bc6cac2ecef02d3cb0989e3ea328510dfe14b17e63a7b121 not found: ID does not exist" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.815370 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-567cc96bf-ps79l"] Jan 21 15:38:09 crc kubenswrapper[4890]: E0121 15:38:09.815695 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2270ab-c573-4a00-8c0e-42e79598fcd6" containerName="controller-manager" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.815713 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2270ab-c573-4a00-8c0e-42e79598fcd6" containerName="controller-manager" Jan 21 15:38:09 crc kubenswrapper[4890]: E0121 15:38:09.815748 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61072ec3-4b37-46db-8092-0f68536323cf" containerName="route-controller-manager" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.815758 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="61072ec3-4b37-46db-8092-0f68536323cf" containerName="route-controller-manager" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.815877 4890 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4d2270ab-c573-4a00-8c0e-42e79598fcd6" containerName="controller-manager" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.815898 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="61072ec3-4b37-46db-8092-0f68536323cf" containerName="route-controller-manager" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.816398 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.820516 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.830816 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.831140 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.832617 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.833493 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.837718 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.838712 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.838978 4890 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2"] Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.840371 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.843718 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.844081 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.844337 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.851181 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.851842 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.852048 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.854549 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-567cc96bf-ps79l"] Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.860146 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2"] Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.930659 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4d2270ab-c573-4a00-8c0e-42e79598fcd6" path="/var/lib/kubelet/pods/4d2270ab-c573-4a00-8c0e-42e79598fcd6/volumes" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.931295 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61072ec3-4b37-46db-8092-0f68536323cf" path="/var/lib/kubelet/pods/61072ec3-4b37-46db-8092-0f68536323cf/volumes" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.996037 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ggfs\" (UniqueName: \"kubernetes.io/projected/14a50338-268c-4b48-953b-370e016f329a-kube-api-access-5ggfs\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.996086 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d956ad9-0824-48c1-8092-88efe6638df5-client-ca\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.996141 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d956ad9-0824-48c1-8092-88efe6638df5-config\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.996169 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzmp\" (UniqueName: 
\"kubernetes.io/projected/4d956ad9-0824-48c1-8092-88efe6638df5-kube-api-access-fqzmp\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.996189 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14a50338-268c-4b48-953b-370e016f329a-proxy-ca-bundles\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.996210 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14a50338-268c-4b48-953b-370e016f329a-serving-cert\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.996235 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d956ad9-0824-48c1-8092-88efe6638df5-serving-cert\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.996306 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14a50338-268c-4b48-953b-370e016f329a-client-ca\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " 
pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:09 crc kubenswrapper[4890]: I0121 15:38:09.996363 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14a50338-268c-4b48-953b-370e016f329a-config\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.097235 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ggfs\" (UniqueName: \"kubernetes.io/projected/14a50338-268c-4b48-953b-370e016f329a-kube-api-access-5ggfs\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.097280 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d956ad9-0824-48c1-8092-88efe6638df5-client-ca\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.097330 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d956ad9-0824-48c1-8092-88efe6638df5-config\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.097393 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzmp\" (UniqueName: 
\"kubernetes.io/projected/4d956ad9-0824-48c1-8092-88efe6638df5-kube-api-access-fqzmp\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.097425 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14a50338-268c-4b48-953b-370e016f329a-proxy-ca-bundles\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.097452 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14a50338-268c-4b48-953b-370e016f329a-serving-cert\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.097481 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d956ad9-0824-48c1-8092-88efe6638df5-serving-cert\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.097518 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14a50338-268c-4b48-953b-370e016f329a-client-ca\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 
15:38:10.097545 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14a50338-268c-4b48-953b-370e016f329a-config\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.098865 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14a50338-268c-4b48-953b-370e016f329a-client-ca\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.099097 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14a50338-268c-4b48-953b-370e016f329a-proxy-ca-bundles\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.098865 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d956ad9-0824-48c1-8092-88efe6638df5-client-ca\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.099890 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d956ad9-0824-48c1-8092-88efe6638df5-config\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " 
pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.101973 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14a50338-268c-4b48-953b-370e016f329a-config\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.104717 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14a50338-268c-4b48-953b-370e016f329a-serving-cert\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.104845 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d956ad9-0824-48c1-8092-88efe6638df5-serving-cert\") pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.112781 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ggfs\" (UniqueName: \"kubernetes.io/projected/14a50338-268c-4b48-953b-370e016f329a-kube-api-access-5ggfs\") pod \"controller-manager-567cc96bf-ps79l\" (UID: \"14a50338-268c-4b48-953b-370e016f329a\") " pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.114298 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzmp\" (UniqueName: \"kubernetes.io/projected/4d956ad9-0824-48c1-8092-88efe6638df5-kube-api-access-fqzmp\") 
pod \"route-controller-manager-5d9d8d5cd6-2sbv2\" (UID: \"4d956ad9-0824-48c1-8092-88efe6638df5\") " pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.143790 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.163272 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.370523 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2"] Jan 21 15:38:10 crc kubenswrapper[4890]: I0121 15:38:10.580776 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-567cc96bf-ps79l"] Jan 21 15:38:11 crc kubenswrapper[4890]: I0121 15:38:11.164252 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" event={"ID":"14a50338-268c-4b48-953b-370e016f329a","Type":"ContainerStarted","Data":"ee0af982e93b75ad217d0da2155e55699d00a1431dfe726c7abf64f5560dc20d"} Jan 21 15:38:11 crc kubenswrapper[4890]: I0121 15:38:11.164794 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" event={"ID":"14a50338-268c-4b48-953b-370e016f329a","Type":"ContainerStarted","Data":"2a0a0d8eb7dc7303122d048fe205b62d0189e4e3f246d10590160eeb19d4737d"} Jan 21 15:38:11 crc kubenswrapper[4890]: I0121 15:38:11.166160 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" 
event={"ID":"4d956ad9-0824-48c1-8092-88efe6638df5","Type":"ContainerStarted","Data":"f682e52789e08cfdaa542b57531e37a11684e2e2dcb39bbf2c59b6dd3b09bc23"} Jan 21 15:38:11 crc kubenswrapper[4890]: I0121 15:38:11.166208 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" event={"ID":"4d956ad9-0824-48c1-8092-88efe6638df5","Type":"ContainerStarted","Data":"552f95b1a7cd4d6954bb462fdd4984df3efeb26d975d6a5976db884c6f08677d"} Jan 21 15:38:12 crc kubenswrapper[4890]: I0121 15:38:12.172808 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:12 crc kubenswrapper[4890]: I0121 15:38:12.173143 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:12 crc kubenswrapper[4890]: I0121 15:38:12.179654 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" Jan 21 15:38:12 crc kubenswrapper[4890]: I0121 15:38:12.182940 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" Jan 21 15:38:12 crc kubenswrapper[4890]: I0121 15:38:12.197177 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d9d8d5cd6-2sbv2" podStartSLOduration=4.197153362 podStartE2EDuration="4.197153362s" podCreationTimestamp="2026-01-21 15:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:38:12.189720577 +0000 UTC m=+374.551163036" watchObservedRunningTime="2026-01-21 15:38:12.197153362 +0000 UTC m=+374.558595781" Jan 21 15:38:12 crc kubenswrapper[4890]: 
I0121 15:38:12.234566 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-567cc96bf-ps79l" podStartSLOduration=4.234543605 podStartE2EDuration="4.234543605s" podCreationTimestamp="2026-01-21 15:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:38:12.211031417 +0000 UTC m=+374.572473836" watchObservedRunningTime="2026-01-21 15:38:12.234543605 +0000 UTC m=+374.595986014" Jan 21 15:38:18 crc kubenswrapper[4890]: I0121 15:38:18.761770 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:38:18 crc kubenswrapper[4890]: I0121 15:38:18.762125 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.113692 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56k6w"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.114647 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-56k6w" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerName="registry-server" containerID="cri-o://7890aaaa7455c555fad8510c98ea107cefb911ebdaf04897c2e06dd35cfa40ee" gracePeriod=30 Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.126028 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-vdj8x"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.141690 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7znlr"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.142179 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" podUID="d8be7071-7d2a-492a-b511-be4ff4650873" containerName="marketplace-operator" containerID="cri-o://bc37bfac3bd57b848797a9acf4c6d19096fedfc07045ae48df6334c8aa17a50b" gracePeriod=30 Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.151413 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vt8l"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.152169 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5vt8l" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerName="registry-server" containerID="cri-o://38063aaa80cbff2685b86feccf87dada6bb2c6f34c03ce1dd5f30bd9cf9ca150" gracePeriod=30 Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.161017 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9pcl"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.163022 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j9pcl" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="registry-server" containerID="cri-o://91ee9f94f13bd30b3a0d1b148970f2a6326bec387874c7cc179a315c8f225a9f" gracePeriod=30 Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.176431 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2g5nx"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.179991 4890 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.181754 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2g5nx"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.290981 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vdj8x" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerName="registry-server" containerID="cri-o://6666b6dd6d54d6932c534123d2ca6870c59bd73fcfe3ec6f1b386369315ef945" gracePeriod=30 Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.329501 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e522c-d015-4945-9062-183e67bb8239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2g5nx\" (UID: \"7b1e522c-d015-4945-9062-183e67bb8239\") " pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.329559 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5445h\" (UniqueName: \"kubernetes.io/projected/7b1e522c-d015-4945-9062-183e67bb8239-kube-api-access-5445h\") pod \"marketplace-operator-79b997595-2g5nx\" (UID: \"7b1e522c-d015-4945-9062-183e67bb8239\") " pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.329763 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b1e522c-d015-4945-9062-183e67bb8239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2g5nx\" (UID: \"7b1e522c-d015-4945-9062-183e67bb8239\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.430507 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e522c-d015-4945-9062-183e67bb8239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2g5nx\" (UID: \"7b1e522c-d015-4945-9062-183e67bb8239\") " pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.430545 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5445h\" (UniqueName: \"kubernetes.io/projected/7b1e522c-d015-4945-9062-183e67bb8239-kube-api-access-5445h\") pod \"marketplace-operator-79b997595-2g5nx\" (UID: \"7b1e522c-d015-4945-9062-183e67bb8239\") " pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.430603 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b1e522c-d015-4945-9062-183e67bb8239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2g5nx\" (UID: \"7b1e522c-d015-4945-9062-183e67bb8239\") " pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.432502 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e522c-d015-4945-9062-183e67bb8239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2g5nx\" (UID: \"7b1e522c-d015-4945-9062-183e67bb8239\") " pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.439022 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/7b1e522c-d015-4945-9062-183e67bb8239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2g5nx\" (UID: \"7b1e522c-d015-4945-9062-183e67bb8239\") " pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.447978 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5445h\" (UniqueName: \"kubernetes.io/projected/7b1e522c-d015-4945-9062-183e67bb8239-kube-api-access-5445h\") pod \"marketplace-operator-79b997595-2g5nx\" (UID: \"7b1e522c-d015-4945-9062-183e67bb8239\") " pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.504215 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.804188 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bwbgs"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.806686 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.816602 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bwbgs"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.943493 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2g5nx"] Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.952401 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.952486 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-registry-certificates\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.952520 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qp7t\" (UniqueName: \"kubernetes.io/projected/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-kube-api-access-2qp7t\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.952541 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-registry-tls\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.952586 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.952615 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-bound-sa-token\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.952639 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-trusted-ca\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.952675 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:33 
crc kubenswrapper[4890]: W0121 15:38:33.967789 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b1e522c_d015_4945_9062_183e67bb8239.slice/crio-8c6fb4b18e4c930f8f6574bd584406a7fbdd118ecd108b2d448fd73352e7e867 WatchSource:0}: Error finding container 8c6fb4b18e4c930f8f6574bd584406a7fbdd118ecd108b2d448fd73352e7e867: Status 404 returned error can't find the container with id 8c6fb4b18e4c930f8f6574bd584406a7fbdd118ecd108b2d448fd73352e7e867 Jan 21 15:38:33 crc kubenswrapper[4890]: I0121 15:38:33.984983 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.053759 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qp7t\" (UniqueName: \"kubernetes.io/projected/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-kube-api-access-2qp7t\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.053802 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-registry-tls\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.053848 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.053880 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-bound-sa-token\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.053903 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-trusted-ca\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.053927 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.053963 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-registry-certificates\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.055463 4890 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.055932 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-registry-certificates\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.056684 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-trusted-ca\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.070429 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.073688 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-registry-tls\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 
15:38:34.073882 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qp7t\" (UniqueName: \"kubernetes.io/projected/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-kube-api-access-2qp7t\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.078555 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08dbb144-d6eb-4e8b-8ce0-a3814089fbbb-bound-sa-token\") pod \"image-registry-66df7c8f76-bwbgs\" (UID: \"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.257193 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.325926 4890 generic.go:334] "Generic (PLEG): container finished" podID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerID="7890aaaa7455c555fad8510c98ea107cefb911ebdaf04897c2e06dd35cfa40ee" exitCode=0 Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.326015 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56k6w" event={"ID":"4fe216b5-31a0-4a3e-aa65-c35c43fb6073","Type":"ContainerDied","Data":"7890aaaa7455c555fad8510c98ea107cefb911ebdaf04897c2e06dd35cfa40ee"} Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.326052 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56k6w" event={"ID":"4fe216b5-31a0-4a3e-aa65-c35c43fb6073","Type":"ContainerDied","Data":"cc4c34d7a23e217b713b49be081df24b5f3ee6cd7f7e0b7d8810e0a02ed9a527"} Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.326066 4890 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="cc4c34d7a23e217b713b49be081df24b5f3ee6cd7f7e0b7d8810e0a02ed9a527" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.328915 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.330636 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" event={"ID":"7b1e522c-d015-4945-9062-183e67bb8239","Type":"ContainerStarted","Data":"eaa3385fa248c7646cd52d979d19122ca03fbbc674db3193cbcffced9ab97e94"} Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.330676 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" event={"ID":"7b1e522c-d015-4945-9062-183e67bb8239","Type":"ContainerStarted","Data":"8c6fb4b18e4c930f8f6574bd584406a7fbdd118ecd108b2d448fd73352e7e867"} Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.333464 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.334498 4890 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2g5nx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.334560 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" podUID="7b1e522c-d015-4945-9062-183e67bb8239" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.379871 4890 generic.go:334] 
"Generic (PLEG): container finished" podID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerID="6666b6dd6d54d6932c534123d2ca6870c59bd73fcfe3ec6f1b386369315ef945" exitCode=0 Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.380008 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdj8x" event={"ID":"37c92d1b-6b73-4c8f-b5f5-39062afd3003","Type":"ContainerDied","Data":"6666b6dd6d54d6932c534123d2ca6870c59bd73fcfe3ec6f1b386369315ef945"} Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.394878 4890 generic.go:334] "Generic (PLEG): container finished" podID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerID="91ee9f94f13bd30b3a0d1b148970f2a6326bec387874c7cc179a315c8f225a9f" exitCode=0 Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.394967 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pcl" event={"ID":"5fbee014-1292-47e2-b628-a2bf014b6f09","Type":"ContainerDied","Data":"91ee9f94f13bd30b3a0d1b148970f2a6326bec387874c7cc179a315c8f225a9f"} Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.413150 4890 generic.go:334] "Generic (PLEG): container finished" podID="d8be7071-7d2a-492a-b511-be4ff4650873" containerID="bc37bfac3bd57b848797a9acf4c6d19096fedfc07045ae48df6334c8aa17a50b" exitCode=0 Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.413259 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" event={"ID":"d8be7071-7d2a-492a-b511-be4ff4650873","Type":"ContainerDied","Data":"bc37bfac3bd57b848797a9acf4c6d19096fedfc07045ae48df6334c8aa17a50b"} Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.418178 4890 generic.go:334] "Generic (PLEG): container finished" podID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerID="38063aaa80cbff2685b86feccf87dada6bb2c6f34c03ce1dd5f30bd9cf9ca150" exitCode=0 Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.418226 4890 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vt8l" event={"ID":"b30c6789-488c-4191-bbb2-24ff82f8c648","Type":"ContainerDied","Data":"38063aaa80cbff2685b86feccf87dada6bb2c6f34c03ce1dd5f30bd9cf9ca150"} Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.461166 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-catalog-content\") pod \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.461213 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47z6x\" (UniqueName: \"kubernetes.io/projected/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-kube-api-access-47z6x\") pod \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.461273 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-utilities\") pod \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\" (UID: \"4fe216b5-31a0-4a3e-aa65-c35c43fb6073\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.467957 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-utilities" (OuterVolumeSpecName: "utilities") pod "4fe216b5-31a0-4a3e-aa65-c35c43fb6073" (UID: "4fe216b5-31a0-4a3e-aa65-c35c43fb6073"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.471547 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-kube-api-access-47z6x" (OuterVolumeSpecName: "kube-api-access-47z6x") pod "4fe216b5-31a0-4a3e-aa65-c35c43fb6073" (UID: "4fe216b5-31a0-4a3e-aa65-c35c43fb6073"). InnerVolumeSpecName "kube-api-access-47z6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.509719 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.514040 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.520687 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fe216b5-31a0-4a3e-aa65-c35c43fb6073" (UID: "4fe216b5-31a0-4a3e-aa65-c35c43fb6073"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.525576 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.534768 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" podStartSLOduration=1.534747064 podStartE2EDuration="1.534747064s" podCreationTimestamp="2026-01-21 15:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:38:34.378649643 +0000 UTC m=+396.740092052" watchObservedRunningTime="2026-01-21 15:38:34.534747064 +0000 UTC m=+396.896189493" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.541041 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.562457 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.562485 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47z6x\" (UniqueName: \"kubernetes.io/projected/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-kube-api-access-47z6x\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.562495 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe216b5-31a0-4a3e-aa65-c35c43fb6073-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.663800 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv428\" (UniqueName: \"kubernetes.io/projected/b30c6789-488c-4191-bbb2-24ff82f8c648-kube-api-access-rv428\") pod \"b30c6789-488c-4191-bbb2-24ff82f8c648\" (UID: 
\"b30c6789-488c-4191-bbb2-24ff82f8c648\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.663862 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-catalog-content\") pod \"5fbee014-1292-47e2-b628-a2bf014b6f09\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.663893 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-catalog-content\") pod \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.663917 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-utilities\") pod \"b30c6789-488c-4191-bbb2-24ff82f8c648\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.663943 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-trusted-ca\") pod \"d8be7071-7d2a-492a-b511-be4ff4650873\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.663964 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8gbl\" (UniqueName: \"kubernetes.io/projected/37c92d1b-6b73-4c8f-b5f5-39062afd3003-kube-api-access-c8gbl\") pod \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.664007 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-utilities\") pod \"5fbee014-1292-47e2-b628-a2bf014b6f09\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.664026 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-utilities\") pod \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\" (UID: \"37c92d1b-6b73-4c8f-b5f5-39062afd3003\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.664044 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgmkh\" (UniqueName: \"kubernetes.io/projected/5fbee014-1292-47e2-b628-a2bf014b6f09-kube-api-access-vgmkh\") pod \"5fbee014-1292-47e2-b628-a2bf014b6f09\" (UID: \"5fbee014-1292-47e2-b628-a2bf014b6f09\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.664070 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-operator-metrics\") pod \"d8be7071-7d2a-492a-b511-be4ff4650873\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.664086 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9tff\" (UniqueName: \"kubernetes.io/projected/d8be7071-7d2a-492a-b511-be4ff4650873-kube-api-access-g9tff\") pod \"d8be7071-7d2a-492a-b511-be4ff4650873\" (UID: \"d8be7071-7d2a-492a-b511-be4ff4650873\") " Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.664101 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-catalog-content\") pod \"b30c6789-488c-4191-bbb2-24ff82f8c648\" (UID: \"b30c6789-488c-4191-bbb2-24ff82f8c648\") " 
Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.669043 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c92d1b-6b73-4c8f-b5f5-39062afd3003-kube-api-access-c8gbl" (OuterVolumeSpecName: "kube-api-access-c8gbl") pod "37c92d1b-6b73-4c8f-b5f5-39062afd3003" (UID: "37c92d1b-6b73-4c8f-b5f5-39062afd3003"). InnerVolumeSpecName "kube-api-access-c8gbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.670994 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b30c6789-488c-4191-bbb2-24ff82f8c648-kube-api-access-rv428" (OuterVolumeSpecName: "kube-api-access-rv428") pod "b30c6789-488c-4191-bbb2-24ff82f8c648" (UID: "b30c6789-488c-4191-bbb2-24ff82f8c648"). InnerVolumeSpecName "kube-api-access-rv428". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.673360 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-utilities" (OuterVolumeSpecName: "utilities") pod "b30c6789-488c-4191-bbb2-24ff82f8c648" (UID: "b30c6789-488c-4191-bbb2-24ff82f8c648"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.673563 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d8be7071-7d2a-492a-b511-be4ff4650873" (UID: "d8be7071-7d2a-492a-b511-be4ff4650873"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.674009 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-utilities" (OuterVolumeSpecName: "utilities") pod "5fbee014-1292-47e2-b628-a2bf014b6f09" (UID: "5fbee014-1292-47e2-b628-a2bf014b6f09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.674534 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-utilities" (OuterVolumeSpecName: "utilities") pod "37c92d1b-6b73-4c8f-b5f5-39062afd3003" (UID: "37c92d1b-6b73-4c8f-b5f5-39062afd3003"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.677017 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbee014-1292-47e2-b628-a2bf014b6f09-kube-api-access-vgmkh" (OuterVolumeSpecName: "kube-api-access-vgmkh") pod "5fbee014-1292-47e2-b628-a2bf014b6f09" (UID: "5fbee014-1292-47e2-b628-a2bf014b6f09"). InnerVolumeSpecName "kube-api-access-vgmkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.678095 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8be7071-7d2a-492a-b511-be4ff4650873-kube-api-access-g9tff" (OuterVolumeSpecName: "kube-api-access-g9tff") pod "d8be7071-7d2a-492a-b511-be4ff4650873" (UID: "d8be7071-7d2a-492a-b511-be4ff4650873"). InnerVolumeSpecName "kube-api-access-g9tff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.678582 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d8be7071-7d2a-492a-b511-be4ff4650873" (UID: "d8be7071-7d2a-492a-b511-be4ff4650873"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.687220 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b30c6789-488c-4191-bbb2-24ff82f8c648" (UID: "b30c6789-488c-4191-bbb2-24ff82f8c648"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.732091 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37c92d1b-6b73-4c8f-b5f5-39062afd3003" (UID: "37c92d1b-6b73-4c8f-b5f5-39062afd3003"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765373 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765452 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765461 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgmkh\" (UniqueName: \"kubernetes.io/projected/5fbee014-1292-47e2-b628-a2bf014b6f09-kube-api-access-vgmkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765475 4890 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765488 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9tff\" (UniqueName: \"kubernetes.io/projected/d8be7071-7d2a-492a-b511-be4ff4650873-kube-api-access-g9tff\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765497 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765506 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv428\" (UniqueName: \"kubernetes.io/projected/b30c6789-488c-4191-bbb2-24ff82f8c648-kube-api-access-rv428\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 
crc kubenswrapper[4890]: I0121 15:38:34.765515 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c92d1b-6b73-4c8f-b5f5-39062afd3003-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765525 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b30c6789-488c-4191-bbb2-24ff82f8c648-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765534 4890 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8be7071-7d2a-492a-b511-be4ff4650873-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.765542 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8gbl\" (UniqueName: \"kubernetes.io/projected/37c92d1b-6b73-4c8f-b5f5-39062afd3003-kube-api-access-c8gbl\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.778639 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bwbgs"] Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.829465 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fbee014-1292-47e2-b628-a2bf014b6f09" (UID: "5fbee014-1292-47e2-b628-a2bf014b6f09"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:34 crc kubenswrapper[4890]: I0121 15:38:34.868012 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbee014-1292-47e2-b628-a2bf014b6f09-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.427183 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vt8l" event={"ID":"b30c6789-488c-4191-bbb2-24ff82f8c648","Type":"ContainerDied","Data":"907bfe1ceb5187115dfffd02a7464bff6e890ea360115e79d5496bedd1317069"} Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.427653 4890 scope.go:117] "RemoveContainer" containerID="38063aaa80cbff2685b86feccf87dada6bb2c6f34c03ce1dd5f30bd9cf9ca150" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.427835 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vt8l" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.430776 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" event={"ID":"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb","Type":"ContainerStarted","Data":"0140d5a7d80f9e3cbac514468204e8f3a17e32b6900791714c446fd2eb70d540"} Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.430848 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" event={"ID":"08dbb144-d6eb-4e8b-8ce0-a3814089fbbb","Type":"ContainerStarted","Data":"e672fe8022c8bd34712231cde8fc54a9c03f4b2db66faa4d77ce098e2c33129e"} Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.431604 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.434170 4890 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-vdj8x" event={"ID":"37c92d1b-6b73-4c8f-b5f5-39062afd3003","Type":"ContainerDied","Data":"495a63a37d23de1dea7b6125c4a64d4959d96333e231afbf80b231755d206ab6"} Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.434296 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vdj8x" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.438453 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pcl" event={"ID":"5fbee014-1292-47e2-b628-a2bf014b6f09","Type":"ContainerDied","Data":"3f33f2108c766b31fdd17acf39160346f4b3f6bced0ab1ef08228c767a5068d3"} Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.438485 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9pcl" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.444940 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56k6w" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.444994 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" event={"ID":"d8be7071-7d2a-492a-b511-be4ff4650873","Type":"ContainerDied","Data":"f47d90f20e184be4e03502233baa4c507be7858632c1d399f4acd0b55e1e6c2f"} Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.445206 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7znlr" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.448435 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2g5nx" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.456026 4890 scope.go:117] "RemoveContainer" containerID="64d1b7005df06540dc61b6adda032bec5b05bc1933dad8b6e77c816852c2c479" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.463921 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs" podStartSLOduration=2.463893411 podStartE2EDuration="2.463893411s" podCreationTimestamp="2026-01-21 15:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:38:35.456313405 +0000 UTC m=+397.817755814" watchObservedRunningTime="2026-01-21 15:38:35.463893411 +0000 UTC m=+397.825335820" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.475483 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vt8l"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.495609 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vt8l"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.504692 4890 scope.go:117] "RemoveContainer" containerID="8cb24b9a05ae4193932cf36b8ba3b12d2a8f845215e73969dc9f61b65a41d525" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.526020 4890 scope.go:117] "RemoveContainer" containerID="6666b6dd6d54d6932c534123d2ca6870c59bd73fcfe3ec6f1b386369315ef945" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.552507 4890 scope.go:117] "RemoveContainer" containerID="749090133a998a9125312c009da85ab6f5008d548549e121db7a5a0077128466" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 
15:38:35.552652 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9pcl"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.586566 4890 scope.go:117] "RemoveContainer" containerID="4afbc2a9fc3a1a5321b0e25e6790e2cbd2905b3fa9ce9af6b8c5a2ba78137728" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.591238 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j9pcl"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.600567 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56k6w"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.606286 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-56k6w"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.610335 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vdj8x"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.613268 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vdj8x"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.613398 4890 scope.go:117] "RemoveContainer" containerID="91ee9f94f13bd30b3a0d1b148970f2a6326bec387874c7cc179a315c8f225a9f" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.615953 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7znlr"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.618889 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7znlr"] Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.629832 4890 scope.go:117] "RemoveContainer" containerID="5ff4ba00f37a5c1e5d4be77ee9f8586a8ea38230b9554b98b78b8ee50999f72f" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.651161 4890 scope.go:117] "RemoveContainer" 
containerID="f155c63cb979620569a6c5f88d03bd7b875c6594a80705ba38ab690ea4684796" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.665589 4890 scope.go:117] "RemoveContainer" containerID="bc37bfac3bd57b848797a9acf4c6d19096fedfc07045ae48df6334c8aa17a50b" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.922671 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" path="/var/lib/kubelet/pods/37c92d1b-6b73-4c8f-b5f5-39062afd3003/volumes" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.923586 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" path="/var/lib/kubelet/pods/4fe216b5-31a0-4a3e-aa65-c35c43fb6073/volumes" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.924338 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" path="/var/lib/kubelet/pods/5fbee014-1292-47e2-b628-a2bf014b6f09/volumes" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.925724 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" path="/var/lib/kubelet/pods/b30c6789-488c-4191-bbb2-24ff82f8c648/volumes" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.926522 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8be7071-7d2a-492a-b511-be4ff4650873" path="/var/lib/kubelet/pods/d8be7071-7d2a-492a-b511-be4ff4650873/volumes" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986063 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9l4rd"] Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986361 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerName="extract-content" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986376 4890 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerName="extract-content" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986389 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerName="extract-utilities" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986395 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerName="extract-utilities" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986404 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986411 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986419 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8be7071-7d2a-492a-b511-be4ff4650873" containerName="marketplace-operator" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986425 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8be7071-7d2a-492a-b511-be4ff4650873" containerName="marketplace-operator" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986432 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerName="extract-utilities" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986438 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerName="extract-utilities" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986447 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerName="extract-content" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986453 4890 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerName="extract-content" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986459 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerName="extract-content" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986464 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerName="extract-content" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986475 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="extract-content" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986481 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="extract-content" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986491 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986496 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986507 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="extract-utilities" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986514 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="extract-utilities" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986529 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerName="extract-utilities" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986538 4890 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerName="extract-utilities" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986546 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986554 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: E0121 15:38:35.986565 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986578 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986707 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8be7071-7d2a-492a-b511-be4ff4650873" containerName="marketplace-operator" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986720 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="b30c6789-488c-4191-bbb2-24ff82f8c648" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986734 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe216b5-31a0-4a3e-aa65-c35c43fb6073" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986748 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c92d1b-6b73-4c8f-b5f5-39062afd3003" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.986756 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fbee014-1292-47e2-b628-a2bf014b6f09" containerName="registry-server" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.987559 4890 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.994193 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:38:35 crc kubenswrapper[4890]: I0121 15:38:35.996537 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9l4rd"] Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.087228 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g695h\" (UniqueName: \"kubernetes.io/projected/43ba982c-8921-4fd0-96af-2522d1323265-kube-api-access-g695h\") pod \"redhat-marketplace-9l4rd\" (UID: \"43ba982c-8921-4fd0-96af-2522d1323265\") " pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.087274 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43ba982c-8921-4fd0-96af-2522d1323265-catalog-content\") pod \"redhat-marketplace-9l4rd\" (UID: \"43ba982c-8921-4fd0-96af-2522d1323265\") " pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.087436 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43ba982c-8921-4fd0-96af-2522d1323265-utilities\") pod \"redhat-marketplace-9l4rd\" (UID: \"43ba982c-8921-4fd0-96af-2522d1323265\") " pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.189123 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43ba982c-8921-4fd0-96af-2522d1323265-catalog-content\") pod \"redhat-marketplace-9l4rd\" 
(UID: \"43ba982c-8921-4fd0-96af-2522d1323265\") " pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.189249 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43ba982c-8921-4fd0-96af-2522d1323265-utilities\") pod \"redhat-marketplace-9l4rd\" (UID: \"43ba982c-8921-4fd0-96af-2522d1323265\") " pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.189336 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g695h\" (UniqueName: \"kubernetes.io/projected/43ba982c-8921-4fd0-96af-2522d1323265-kube-api-access-g695h\") pod \"redhat-marketplace-9l4rd\" (UID: \"43ba982c-8921-4fd0-96af-2522d1323265\") " pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.189798 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43ba982c-8921-4fd0-96af-2522d1323265-catalog-content\") pod \"redhat-marketplace-9l4rd\" (UID: \"43ba982c-8921-4fd0-96af-2522d1323265\") " pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.189864 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43ba982c-8921-4fd0-96af-2522d1323265-utilities\") pod \"redhat-marketplace-9l4rd\" (UID: \"43ba982c-8921-4fd0-96af-2522d1323265\") " pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.212865 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g695h\" (UniqueName: \"kubernetes.io/projected/43ba982c-8921-4fd0-96af-2522d1323265-kube-api-access-g695h\") pod \"redhat-marketplace-9l4rd\" (UID: \"43ba982c-8921-4fd0-96af-2522d1323265\") 
" pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.301750 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9l4rd" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.703501 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9l4rd"] Jan 21 15:38:36 crc kubenswrapper[4890]: W0121 15:38:36.710400 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43ba982c_8921_4fd0_96af_2522d1323265.slice/crio-81cb591639c2d6ea0d06c5f21fc34c50d55ca39f859f840dca61febb968e2ca2 WatchSource:0}: Error finding container 81cb591639c2d6ea0d06c5f21fc34c50d55ca39f859f840dca61febb968e2ca2: Status 404 returned error can't find the container with id 81cb591639c2d6ea0d06c5f21fc34c50d55ca39f859f840dca61febb968e2ca2 Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.965498 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jnlgx"] Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.966421 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.970124 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:38:36 crc kubenswrapper[4890]: I0121 15:38:36.980624 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jnlgx"] Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.101948 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1341039e-0f34-43a0-8fc3-36a65ecb8505-catalog-content\") pod \"redhat-operators-jnlgx\" (UID: \"1341039e-0f34-43a0-8fc3-36a65ecb8505\") " pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.102339 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n8t4\" (UniqueName: \"kubernetes.io/projected/1341039e-0f34-43a0-8fc3-36a65ecb8505-kube-api-access-5n8t4\") pod \"redhat-operators-jnlgx\" (UID: \"1341039e-0f34-43a0-8fc3-36a65ecb8505\") " pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.102473 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1341039e-0f34-43a0-8fc3-36a65ecb8505-utilities\") pod \"redhat-operators-jnlgx\" (UID: \"1341039e-0f34-43a0-8fc3-36a65ecb8505\") " pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.203437 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1341039e-0f34-43a0-8fc3-36a65ecb8505-utilities\") pod \"redhat-operators-jnlgx\" (UID: \"1341039e-0f34-43a0-8fc3-36a65ecb8505\") " 
pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.203515 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1341039e-0f34-43a0-8fc3-36a65ecb8505-catalog-content\") pod \"redhat-operators-jnlgx\" (UID: \"1341039e-0f34-43a0-8fc3-36a65ecb8505\") " pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.203555 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n8t4\" (UniqueName: \"kubernetes.io/projected/1341039e-0f34-43a0-8fc3-36a65ecb8505-kube-api-access-5n8t4\") pod \"redhat-operators-jnlgx\" (UID: \"1341039e-0f34-43a0-8fc3-36a65ecb8505\") " pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.204022 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1341039e-0f34-43a0-8fc3-36a65ecb8505-utilities\") pod \"redhat-operators-jnlgx\" (UID: \"1341039e-0f34-43a0-8fc3-36a65ecb8505\") " pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.204101 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1341039e-0f34-43a0-8fc3-36a65ecb8505-catalog-content\") pod \"redhat-operators-jnlgx\" (UID: \"1341039e-0f34-43a0-8fc3-36a65ecb8505\") " pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.221717 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n8t4\" (UniqueName: \"kubernetes.io/projected/1341039e-0f34-43a0-8fc3-36a65ecb8505-kube-api-access-5n8t4\") pod \"redhat-operators-jnlgx\" (UID: \"1341039e-0f34-43a0-8fc3-36a65ecb8505\") " pod="openshift-marketplace/redhat-operators-jnlgx" Jan 
21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.287852 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jnlgx" Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.470787 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l4rd" event={"ID":"43ba982c-8921-4fd0-96af-2522d1323265","Type":"ContainerStarted","Data":"81cb591639c2d6ea0d06c5f21fc34c50d55ca39f859f840dca61febb968e2ca2"} Jan 21 15:38:37 crc kubenswrapper[4890]: I0121 15:38:37.737976 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jnlgx"] Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.371594 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7prm"] Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.374479 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.377842 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.379089 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7prm"] Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.477929 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnlgx" event={"ID":"1341039e-0f34-43a0-8fc3-36a65ecb8505","Type":"ContainerStarted","Data":"e77aec441175946c45c331ce952dde450e2b02b8dd89c46f8f6460783758d546"} Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.525532 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eed3ba63-937e-4ef3-9eda-75221a7ee3c4-catalog-content\") pod \"community-operators-x7prm\" (UID: \"eed3ba63-937e-4ef3-9eda-75221a7ee3c4\") " pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.525627 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67vvp\" (UniqueName: \"kubernetes.io/projected/eed3ba63-937e-4ef3-9eda-75221a7ee3c4-kube-api-access-67vvp\") pod \"community-operators-x7prm\" (UID: \"eed3ba63-937e-4ef3-9eda-75221a7ee3c4\") " pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.525681 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eed3ba63-937e-4ef3-9eda-75221a7ee3c4-utilities\") pod \"community-operators-x7prm\" (UID: \"eed3ba63-937e-4ef3-9eda-75221a7ee3c4\") " pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.626954 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eed3ba63-937e-4ef3-9eda-75221a7ee3c4-catalog-content\") pod \"community-operators-x7prm\" (UID: \"eed3ba63-937e-4ef3-9eda-75221a7ee3c4\") " pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.627051 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67vvp\" (UniqueName: \"kubernetes.io/projected/eed3ba63-937e-4ef3-9eda-75221a7ee3c4-kube-api-access-67vvp\") pod \"community-operators-x7prm\" (UID: \"eed3ba63-937e-4ef3-9eda-75221a7ee3c4\") " pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.627091 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eed3ba63-937e-4ef3-9eda-75221a7ee3c4-utilities\") pod \"community-operators-x7prm\" (UID: \"eed3ba63-937e-4ef3-9eda-75221a7ee3c4\") " pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.627531 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eed3ba63-937e-4ef3-9eda-75221a7ee3c4-catalog-content\") pod \"community-operators-x7prm\" (UID: \"eed3ba63-937e-4ef3-9eda-75221a7ee3c4\") " pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.627562 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eed3ba63-937e-4ef3-9eda-75221a7ee3c4-utilities\") pod \"community-operators-x7prm\" (UID: \"eed3ba63-937e-4ef3-9eda-75221a7ee3c4\") " pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.646289 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67vvp\" (UniqueName: \"kubernetes.io/projected/eed3ba63-937e-4ef3-9eda-75221a7ee3c4-kube-api-access-67vvp\") pod \"community-operators-x7prm\" (UID: \"eed3ba63-937e-4ef3-9eda-75221a7ee3c4\") " pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:38 crc kubenswrapper[4890]: I0121 15:38:38.693848 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7prm" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.095918 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7prm"] Jan 21 15:38:39 crc kubenswrapper[4890]: W0121 15:38:39.105719 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeed3ba63_937e_4ef3_9eda_75221a7ee3c4.slice/crio-817a6e54ef8d97d4914850a00a16790b5a629fe4a417cd09b04fac778fdf7637 WatchSource:0}: Error finding container 817a6e54ef8d97d4914850a00a16790b5a629fe4a417cd09b04fac778fdf7637: Status 404 returned error can't find the container with id 817a6e54ef8d97d4914850a00a16790b5a629fe4a417cd09b04fac778fdf7637 Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.360936 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84nbz"] Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.363433 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.367002 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.377151 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84nbz"] Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.437244 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d76bd25-e92c-4f05-bdb1-149b09a31d2f-utilities\") pod \"certified-operators-84nbz\" (UID: \"3d76bd25-e92c-4f05-bdb1-149b09a31d2f\") " pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.437293 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d76bd25-e92c-4f05-bdb1-149b09a31d2f-catalog-content\") pod \"certified-operators-84nbz\" (UID: \"3d76bd25-e92c-4f05-bdb1-149b09a31d2f\") " pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.437334 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7ptg\" (UniqueName: \"kubernetes.io/projected/3d76bd25-e92c-4f05-bdb1-149b09a31d2f-kube-api-access-t7ptg\") pod \"certified-operators-84nbz\" (UID: \"3d76bd25-e92c-4f05-bdb1-149b09a31d2f\") " pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.484598 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7prm" 
event={"ID":"eed3ba63-937e-4ef3-9eda-75221a7ee3c4","Type":"ContainerStarted","Data":"817a6e54ef8d97d4914850a00a16790b5a629fe4a417cd09b04fac778fdf7637"} Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.538720 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d76bd25-e92c-4f05-bdb1-149b09a31d2f-utilities\") pod \"certified-operators-84nbz\" (UID: \"3d76bd25-e92c-4f05-bdb1-149b09a31d2f\") " pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.538769 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d76bd25-e92c-4f05-bdb1-149b09a31d2f-catalog-content\") pod \"certified-operators-84nbz\" (UID: \"3d76bd25-e92c-4f05-bdb1-149b09a31d2f\") " pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.538815 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7ptg\" (UniqueName: \"kubernetes.io/projected/3d76bd25-e92c-4f05-bdb1-149b09a31d2f-kube-api-access-t7ptg\") pod \"certified-operators-84nbz\" (UID: \"3d76bd25-e92c-4f05-bdb1-149b09a31d2f\") " pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.539541 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d76bd25-e92c-4f05-bdb1-149b09a31d2f-utilities\") pod \"certified-operators-84nbz\" (UID: \"3d76bd25-e92c-4f05-bdb1-149b09a31d2f\") " pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.539776 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d76bd25-e92c-4f05-bdb1-149b09a31d2f-catalog-content\") pod 
\"certified-operators-84nbz\" (UID: \"3d76bd25-e92c-4f05-bdb1-149b09a31d2f\") " pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.557373 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7ptg\" (UniqueName: \"kubernetes.io/projected/3d76bd25-e92c-4f05-bdb1-149b09a31d2f-kube-api-access-t7ptg\") pod \"certified-operators-84nbz\" (UID: \"3d76bd25-e92c-4f05-bdb1-149b09a31d2f\") " pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:39 crc kubenswrapper[4890]: I0121 15:38:39.690804 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84nbz" Jan 21 15:38:40 crc kubenswrapper[4890]: I0121 15:38:40.095049 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84nbz"] Jan 21 15:38:40 crc kubenswrapper[4890]: W0121 15:38:40.102555 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d76bd25_e92c_4f05_bdb1_149b09a31d2f.slice/crio-688e2454c3670900adda2a41599efbff6d2c2001319d7477044140eb503768d7 WatchSource:0}: Error finding container 688e2454c3670900adda2a41599efbff6d2c2001319d7477044140eb503768d7: Status 404 returned error can't find the container with id 688e2454c3670900adda2a41599efbff6d2c2001319d7477044140eb503768d7 Jan 21 15:38:40 crc kubenswrapper[4890]: I0121 15:38:40.491451 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84nbz" event={"ID":"3d76bd25-e92c-4f05-bdb1-149b09a31d2f","Type":"ContainerStarted","Data":"688e2454c3670900adda2a41599efbff6d2c2001319d7477044140eb503768d7"} Jan 21 15:38:41 crc kubenswrapper[4890]: I0121 15:38:41.504682 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l4rd" 
event={"ID":"43ba982c-8921-4fd0-96af-2522d1323265","Type":"ContainerStarted","Data":"26c13097d7a6b90dd00d7b4243ed7f659973900d30635ba69a2bf474a06efc10"}
Jan 21 15:38:42 crc kubenswrapper[4890]: I0121 15:38:42.525798 4890 generic.go:334] "Generic (PLEG): container finished" podID="43ba982c-8921-4fd0-96af-2522d1323265" containerID="26c13097d7a6b90dd00d7b4243ed7f659973900d30635ba69a2bf474a06efc10" exitCode=0
Jan 21 15:38:42 crc kubenswrapper[4890]: I0121 15:38:42.525842 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l4rd" event={"ID":"43ba982c-8921-4fd0-96af-2522d1323265","Type":"ContainerDied","Data":"26c13097d7a6b90dd00d7b4243ed7f659973900d30635ba69a2bf474a06efc10"}
Jan 21 15:38:43 crc kubenswrapper[4890]: I0121 15:38:43.531940 4890 generic.go:334] "Generic (PLEG): container finished" podID="1341039e-0f34-43a0-8fc3-36a65ecb8505" containerID="b76f64082ee970d2438efed6926342b36e74c5816460d692b686589f2b620e8d" exitCode=0
Jan 21 15:38:43 crc kubenswrapper[4890]: I0121 15:38:43.532316 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnlgx" event={"ID":"1341039e-0f34-43a0-8fc3-36a65ecb8505","Type":"ContainerDied","Data":"b76f64082ee970d2438efed6926342b36e74c5816460d692b686589f2b620e8d"}
Jan 21 15:38:43 crc kubenswrapper[4890]: I0121 15:38:43.533977 4890 generic.go:334] "Generic (PLEG): container finished" podID="eed3ba63-937e-4ef3-9eda-75221a7ee3c4" containerID="a1ddd7d7f18a1fe528f52ebb012568dbf644f94d8ff23917b1c1d06db6eb4199" exitCode=0
Jan 21 15:38:43 crc kubenswrapper[4890]: I0121 15:38:43.534058 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7prm" event={"ID":"eed3ba63-937e-4ef3-9eda-75221a7ee3c4","Type":"ContainerDied","Data":"a1ddd7d7f18a1fe528f52ebb012568dbf644f94d8ff23917b1c1d06db6eb4199"}
Jan 21 15:38:43 crc kubenswrapper[4890]: I0121 15:38:43.540968 4890 generic.go:334] "Generic (PLEG): container finished" podID="3d76bd25-e92c-4f05-bdb1-149b09a31d2f" containerID="4ecc4a3c7c9bc58e3d70f8363a754e3d2f552c208edaf8d0d02460046eaf4faa" exitCode=0
Jan 21 15:38:43 crc kubenswrapper[4890]: I0121 15:38:43.541064 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84nbz" event={"ID":"3d76bd25-e92c-4f05-bdb1-149b09a31d2f","Type":"ContainerDied","Data":"4ecc4a3c7c9bc58e3d70f8363a754e3d2f552c208edaf8d0d02460046eaf4faa"}
Jan 21 15:38:44 crc kubenswrapper[4890]: I0121 15:38:44.563265 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7prm" event={"ID":"eed3ba63-937e-4ef3-9eda-75221a7ee3c4","Type":"ContainerStarted","Data":"4446625205e06d06d03b6cf7910d0bb25c6f5ef266b50598ec92dffdd983c7dc"}
Jan 21 15:38:44 crc kubenswrapper[4890]: I0121 15:38:44.569374 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84nbz" event={"ID":"3d76bd25-e92c-4f05-bdb1-149b09a31d2f","Type":"ContainerStarted","Data":"fe10ac275e7f554dfc613db96677b704f3c2d124ceb26a5cd07b80e5b9066104"}
Jan 21 15:38:45 crc kubenswrapper[4890]: I0121 15:38:45.578070 4890 generic.go:334] "Generic (PLEG): container finished" podID="43ba982c-8921-4fd0-96af-2522d1323265" containerID="d20f593c0950f80e42dfc98b398257258553445903e789ceff1af37f24a79aeb" exitCode=0
Jan 21 15:38:45 crc kubenswrapper[4890]: I0121 15:38:45.578438 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l4rd" event={"ID":"43ba982c-8921-4fd0-96af-2522d1323265","Type":"ContainerDied","Data":"d20f593c0950f80e42dfc98b398257258553445903e789ceff1af37f24a79aeb"}
Jan 21 15:38:45 crc kubenswrapper[4890]: I0121 15:38:45.582490 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnlgx" event={"ID":"1341039e-0f34-43a0-8fc3-36a65ecb8505","Type":"ContainerStarted","Data":"08e194d1e073b9e34479950955379b5dd7c98ddc2db3a93960e80f846eda9e1b"}
Jan 21 15:38:45 crc kubenswrapper[4890]: I0121 15:38:45.585107 4890 generic.go:334] "Generic (PLEG): container finished" podID="eed3ba63-937e-4ef3-9eda-75221a7ee3c4" containerID="4446625205e06d06d03b6cf7910d0bb25c6f5ef266b50598ec92dffdd983c7dc" exitCode=0
Jan 21 15:38:45 crc kubenswrapper[4890]: I0121 15:38:45.585181 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7prm" event={"ID":"eed3ba63-937e-4ef3-9eda-75221a7ee3c4","Type":"ContainerDied","Data":"4446625205e06d06d03b6cf7910d0bb25c6f5ef266b50598ec92dffdd983c7dc"}
Jan 21 15:38:45 crc kubenswrapper[4890]: I0121 15:38:45.589092 4890 generic.go:334] "Generic (PLEG): container finished" podID="3d76bd25-e92c-4f05-bdb1-149b09a31d2f" containerID="fe10ac275e7f554dfc613db96677b704f3c2d124ceb26a5cd07b80e5b9066104" exitCode=0
Jan 21 15:38:45 crc kubenswrapper[4890]: I0121 15:38:45.589126 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84nbz" event={"ID":"3d76bd25-e92c-4f05-bdb1-149b09a31d2f","Type":"ContainerDied","Data":"fe10ac275e7f554dfc613db96677b704f3c2d124ceb26a5cd07b80e5b9066104"}
Jan 21 15:38:46 crc kubenswrapper[4890]: I0121 15:38:46.602067 4890 generic.go:334] "Generic (PLEG): container finished" podID="1341039e-0f34-43a0-8fc3-36a65ecb8505" containerID="08e194d1e073b9e34479950955379b5dd7c98ddc2db3a93960e80f846eda9e1b" exitCode=0
Jan 21 15:38:46 crc kubenswrapper[4890]: I0121 15:38:46.602465 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnlgx" event={"ID":"1341039e-0f34-43a0-8fc3-36a65ecb8505","Type":"ContainerDied","Data":"08e194d1e073b9e34479950955379b5dd7c98ddc2db3a93960e80f846eda9e1b"}
Jan 21 15:38:47 crc kubenswrapper[4890]: I0121 15:38:47.611858 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84nbz" event={"ID":"3d76bd25-e92c-4f05-bdb1-149b09a31d2f","Type":"ContainerStarted","Data":"0fe93bef259e38c35e5a2bee926556c4fd5e076f70816300434d5481dcf4ae49"}
Jan 21 15:38:47 crc kubenswrapper[4890]: I0121 15:38:47.614566 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l4rd" event={"ID":"43ba982c-8921-4fd0-96af-2522d1323265","Type":"ContainerStarted","Data":"5fce53de6d809afbbb2885d3e317d899a7469263bca99ce4bff7eed744ac51a8"}
Jan 21 15:38:47 crc kubenswrapper[4890]: I0121 15:38:47.617178 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnlgx" event={"ID":"1341039e-0f34-43a0-8fc3-36a65ecb8505","Type":"ContainerStarted","Data":"55ad7bddace9295299f2f94beedcac303a56bdbbc44cfedaaeb8b1e8e27d8ac5"}
Jan 21 15:38:47 crc kubenswrapper[4890]: I0121 15:38:47.619779 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7prm" event={"ID":"eed3ba63-937e-4ef3-9eda-75221a7ee3c4","Type":"ContainerStarted","Data":"b422b0a46cccaf13bcbbe1547528786c363dff7d54964a840531ee1aebc8a760"}
Jan 21 15:38:47 crc kubenswrapper[4890]: I0121 15:38:47.637128 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84nbz" podStartSLOduration=6.010022773 podStartE2EDuration="8.637092437s" podCreationTimestamp="2026-01-21 15:38:39 +0000 UTC" firstStartedPulling="2026-01-21 15:38:43.545438265 +0000 UTC m=+405.906880674" lastFinishedPulling="2026-01-21 15:38:46.172507929 +0000 UTC m=+408.533950338" observedRunningTime="2026-01-21 15:38:47.636601725 +0000 UTC m=+409.998044134" watchObservedRunningTime="2026-01-21 15:38:47.637092437 +0000 UTC m=+409.998534836"
Jan 21 15:38:47 crc kubenswrapper[4890]: I0121 15:38:47.668098 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jnlgx" podStartSLOduration=8.006660748 podStartE2EDuration="11.668073698s" podCreationTimestamp="2026-01-21 15:38:36 +0000 UTC" firstStartedPulling="2026-01-21 15:38:43.533508324 +0000 UTC m=+405.894950733" lastFinishedPulling="2026-01-21 15:38:47.194921274 +0000 UTC m=+409.556363683" observedRunningTime="2026-01-21 15:38:47.663988477 +0000 UTC m=+410.025430906" watchObservedRunningTime="2026-01-21 15:38:47.668073698 +0000 UTC m=+410.029516107"
Jan 21 15:38:47 crc kubenswrapper[4890]: I0121 15:38:47.687220 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7prm" podStartSLOduration=7.109787303 podStartE2EDuration="9.687204157s" podCreationTimestamp="2026-01-21 15:38:38 +0000 UTC" firstStartedPulling="2026-01-21 15:38:43.536910547 +0000 UTC m=+405.898352956" lastFinishedPulling="2026-01-21 15:38:46.114327401 +0000 UTC m=+408.475769810" observedRunningTime="2026-01-21 15:38:47.687160166 +0000 UTC m=+410.048602575" watchObservedRunningTime="2026-01-21 15:38:47.687204157 +0000 UTC m=+410.048646556"
Jan 21 15:38:47 crc kubenswrapper[4890]: I0121 15:38:47.711874 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9l4rd" podStartSLOduration=10.018691997 podStartE2EDuration="12.711839682s" podCreationTimestamp="2026-01-21 15:38:35 +0000 UTC" firstStartedPulling="2026-01-21 15:38:43.543668682 +0000 UTC m=+405.905111091" lastFinishedPulling="2026-01-21 15:38:46.236816347 +0000 UTC m=+408.598258776" observedRunningTime="2026-01-21 15:38:47.707595968 +0000 UTC m=+410.069038387" watchObservedRunningTime="2026-01-21 15:38:47.711839682 +0000 UTC m=+410.073282081"
Jan 21 15:38:48 crc kubenswrapper[4890]: I0121 15:38:48.695008 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x7prm"
Jan 21 15:38:48 crc kubenswrapper[4890]: I0121 15:38:48.695662 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x7prm"
Jan 21 15:38:48 crc kubenswrapper[4890]: I0121 15:38:48.762083 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:38:48 crc kubenswrapper[4890]: I0121 15:38:48.762509 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:38:48 crc kubenswrapper[4890]: I0121 15:38:48.762674 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh"
Jan 21 15:38:48 crc kubenswrapper[4890]: I0121 15:38:48.763377 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67982b0233662b552433e8cc5e81f5a900b3f7fff6d2f2fc042695614d9cb5be"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:38:48 crc kubenswrapper[4890]: I0121 15:38:48.763521 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://67982b0233662b552433e8cc5e81f5a900b3f7fff6d2f2fc042695614d9cb5be" gracePeriod=600
Jan 21 15:38:49 crc kubenswrapper[4890]: I0121 15:38:49.638753 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="67982b0233662b552433e8cc5e81f5a900b3f7fff6d2f2fc042695614d9cb5be" exitCode=0
Jan 21 15:38:49 crc kubenswrapper[4890]: I0121 15:38:49.638830 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"67982b0233662b552433e8cc5e81f5a900b3f7fff6d2f2fc042695614d9cb5be"}
Jan 21 15:38:49 crc kubenswrapper[4890]: I0121 15:38:49.639098 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"6d13e88a44e40f057930b863b94e86e1c511eca15fbc8041b56d03f36ff8a4f1"}
Jan 21 15:38:49 crc kubenswrapper[4890]: I0121 15:38:49.639120 4890 scope.go:117] "RemoveContainer" containerID="b2643d64c6aecfa4381475d22ae487984ddf128eb77cff2c0cbbedb50b436731"
Jan 21 15:38:49 crc kubenswrapper[4890]: I0121 15:38:49.692339 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-84nbz"
Jan 21 15:38:49 crc kubenswrapper[4890]: I0121 15:38:49.692657 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84nbz"
Jan 21 15:38:49 crc kubenswrapper[4890]: I0121 15:38:49.743981 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-84nbz"
Jan 21 15:38:49 crc kubenswrapper[4890]: I0121 15:38:49.748538 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x7prm" podUID="eed3ba63-937e-4ef3-9eda-75221a7ee3c4" containerName="registry-server" probeResult="failure" output=<
Jan 21 15:38:49 crc kubenswrapper[4890]: timeout: failed to connect service ":50051" within 1s
Jan 21 15:38:49 crc kubenswrapper[4890]: >
Jan 21 15:38:51 crc kubenswrapper[4890]: I0121 15:38:51.697478 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84nbz"
Jan 21 15:38:54 crc kubenswrapper[4890]: I0121 15:38:54.263141 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-bwbgs"
Jan 21 15:38:54 crc kubenswrapper[4890]: I0121 15:38:54.329774 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p86sf"]
Jan 21 15:38:56 crc kubenswrapper[4890]: I0121 15:38:56.302475 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9l4rd"
Jan 21 15:38:56 crc kubenswrapper[4890]: I0121 15:38:56.302963 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9l4rd"
Jan 21 15:38:56 crc kubenswrapper[4890]: I0121 15:38:56.371782 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9l4rd"
Jan 21 15:38:56 crc kubenswrapper[4890]: I0121 15:38:56.741040 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9l4rd"
Jan 21 15:38:57 crc kubenswrapper[4890]: I0121 15:38:57.288262 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jnlgx"
Jan 21 15:38:57 crc kubenswrapper[4890]: I0121 15:38:57.288331 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jnlgx"
Jan 21 15:38:57 crc kubenswrapper[4890]: I0121 15:38:57.327311 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jnlgx"
Jan 21 15:38:57 crc kubenswrapper[4890]: I0121 15:38:57.746133 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jnlgx"
Jan 21 15:38:58 crc kubenswrapper[4890]: I0121 15:38:58.739905 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7prm"
Jan 21 15:38:58 crc kubenswrapper[4890]: I0121 15:38:58.780144 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7prm"
Jan 21 15:39:19 crc kubenswrapper[4890]: I0121 15:39:19.375029 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" podUID="566f28b1-744d-4cd6-b60a-f139a071579d" containerName="registry" containerID="cri-o://c2c95160452d813b0d9dca79d98ea59b0103f10945985e8593e9f45d06fb8e78" gracePeriod=30
Jan 21 15:39:20 crc kubenswrapper[4890]: I0121 15:39:20.842514 4890 generic.go:334] "Generic (PLEG): container finished" podID="566f28b1-744d-4cd6-b60a-f139a071579d" containerID="c2c95160452d813b0d9dca79d98ea59b0103f10945985e8593e9f45d06fb8e78" exitCode=0
Jan 21 15:39:20 crc kubenswrapper[4890]: I0121 15:39:20.842664 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" event={"ID":"566f28b1-744d-4cd6-b60a-f139a071579d","Type":"ContainerDied","Data":"c2c95160452d813b0d9dca79d98ea59b0103f10945985e8593e9f45d06fb8e78"}
Jan 21 15:39:20 crc kubenswrapper[4890]: I0121 15:39:20.988932 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf"
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.085234 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/566f28b1-744d-4cd6-b60a-f139a071579d-ca-trust-extracted\") pod \"566f28b1-744d-4cd6-b60a-f139a071579d\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") "
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.085387 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-bound-sa-token\") pod \"566f28b1-744d-4cd6-b60a-f139a071579d\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") "
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.085415 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-registry-certificates\") pod \"566f28b1-744d-4cd6-b60a-f139a071579d\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") "
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.085469 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-trusted-ca\") pod \"566f28b1-744d-4cd6-b60a-f139a071579d\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") "
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.085498 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/566f28b1-744d-4cd6-b60a-f139a071579d-installation-pull-secrets\") pod \"566f28b1-744d-4cd6-b60a-f139a071579d\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") "
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.085523 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxvv6\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-kube-api-access-vxvv6\") pod \"566f28b1-744d-4cd6-b60a-f139a071579d\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") "
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.085659 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"566f28b1-744d-4cd6-b60a-f139a071579d\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") "
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.085691 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-registry-tls\") pod \"566f28b1-744d-4cd6-b60a-f139a071579d\" (UID: \"566f28b1-744d-4cd6-b60a-f139a071579d\") "
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.086882 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "566f28b1-744d-4cd6-b60a-f139a071579d" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.087337 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "566f28b1-744d-4cd6-b60a-f139a071579d" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.092173 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "566f28b1-744d-4cd6-b60a-f139a071579d" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.092777 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/566f28b1-744d-4cd6-b60a-f139a071579d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "566f28b1-744d-4cd6-b60a-f139a071579d" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.092833 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-kube-api-access-vxvv6" (OuterVolumeSpecName: "kube-api-access-vxvv6") pod "566f28b1-744d-4cd6-b60a-f139a071579d" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d"). InnerVolumeSpecName "kube-api-access-vxvv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.098692 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "566f28b1-744d-4cd6-b60a-f139a071579d" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.099613 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "566f28b1-744d-4cd6-b60a-f139a071579d" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.107398 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/566f28b1-744d-4cd6-b60a-f139a071579d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "566f28b1-744d-4cd6-b60a-f139a071579d" (UID: "566f28b1-744d-4cd6-b60a-f139a071579d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.187126 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.187440 4890 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/566f28b1-744d-4cd6-b60a-f139a071579d-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.187530 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxvv6\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-kube-api-access-vxvv6\") on node \"crc\" DevicePath \"\""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.187618 4890 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.187701 4890 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/566f28b1-744d-4cd6-b60a-f139a071579d-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.187788 4890 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/566f28b1-744d-4cd6-b60a-f139a071579d-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.187894 4890 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/566f28b1-744d-4cd6-b60a-f139a071579d-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.849066 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" event={"ID":"566f28b1-744d-4cd6-b60a-f139a071579d","Type":"ContainerDied","Data":"ee2466e3f424aeaa029450274efcba061da7fde67c8d7e19e26d6bd45e64791c"}
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.849129 4890 scope.go:117] "RemoveContainer" containerID="c2c95160452d813b0d9dca79d98ea59b0103f10945985e8593e9f45d06fb8e78"
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.849169 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf"
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.931152 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p86sf"]
Jan 21 15:39:21 crc kubenswrapper[4890]: I0121 15:39:21.931223 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p86sf"]
Jan 21 15:39:23 crc kubenswrapper[4890]: I0121 15:39:23.926114 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="566f28b1-744d-4cd6-b60a-f139a071579d" path="/var/lib/kubelet/pods/566f28b1-744d-4cd6-b60a-f139a071579d/volumes"
Jan 21 15:39:25 crc kubenswrapper[4890]: I0121 15:39:25.801897 4890 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-p86sf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.14:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 15:39:25 crc kubenswrapper[4890]: I0121 15:39:25.801990 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-p86sf" podUID="566f28b1-744d-4cd6-b60a-f139a071579d" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.14:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:40:58 crc kubenswrapper[4890]: I0121 15:40:58.158559 4890 scope.go:117] "RemoveContainer" containerID="4e266877516fb3ef845010209b5ee751bcb219cac0d05344c9ac4cfe4877b519"
Jan 21 15:40:58 crc kubenswrapper[4890]: I0121 15:40:58.176495 4890 scope.go:117] "RemoveContainer" containerID="14b485b5bf7b581700de9d0d8dcddcddb135531ed4ce57f0c66eb1d64ce279b4"
Jan 21 15:41:18 crc kubenswrapper[4890]: I0121 15:41:18.762934 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:41:18 crc kubenswrapper[4890]: I0121 15:41:18.763567 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:41:48 crc kubenswrapper[4890]: I0121 15:41:48.761724 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:41:48 crc kubenswrapper[4890]: I0121 15:41:48.762280 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:41:58 crc kubenswrapper[4890]: I0121 15:41:58.225204 4890 scope.go:117] "RemoveContainer" containerID="7890aaaa7455c555fad8510c98ea107cefb911ebdaf04897c2e06dd35cfa40ee"
Jan 21 15:41:58 crc kubenswrapper[4890]: I0121 15:41:58.247186 4890 scope.go:117] "RemoveContainer" containerID="e5f5a081503d56e4e1604451fd9d63cd4f5537dd8b47332131e4c1d8c96ee49b"
Jan 21 15:41:58 crc kubenswrapper[4890]: I0121 15:41:58.267334 4890 scope.go:117] "RemoveContainer" containerID="1d79ef7fbbd1c6046291a5187af6d63974ff7777e0f07684a3730eaf8268812e"
Jan 21 15:41:58 crc kubenswrapper[4890]: I0121 15:41:58.286636 4890 scope.go:117] "RemoveContainer" containerID="5e7560c018dafbb419cdedfd636b9fcd40bace05b7dce41612a565a149aeff7e"
Jan 21 15:42:18 crc kubenswrapper[4890]: I0121 15:42:18.762306 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:42:18 crc kubenswrapper[4890]: I0121 15:42:18.762839 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:42:18 crc kubenswrapper[4890]: I0121 15:42:18.762892 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh"
Jan 21 15:42:18 crc kubenswrapper[4890]: I0121 15:42:18.763625 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d13e88a44e40f057930b863b94e86e1c511eca15fbc8041b56d03f36ff8a4f1"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:42:18 crc kubenswrapper[4890]: I0121 15:42:18.763683 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://6d13e88a44e40f057930b863b94e86e1c511eca15fbc8041b56d03f36ff8a4f1" gracePeriod=600
Jan 21 15:42:18 crc kubenswrapper[4890]: I0121 15:42:18.904146 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="6d13e88a44e40f057930b863b94e86e1c511eca15fbc8041b56d03f36ff8a4f1" exitCode=0
Jan 21 15:42:18 crc kubenswrapper[4890]: I0121 15:42:18.904224 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"6d13e88a44e40f057930b863b94e86e1c511eca15fbc8041b56d03f36ff8a4f1"}
Jan 21 15:42:18 crc kubenswrapper[4890]: I0121 15:42:18.904601 4890 scope.go:117] "RemoveContainer" containerID="67982b0233662b552433e8cc5e81f5a900b3f7fff6d2f2fc042695614d9cb5be"
Jan 21 15:42:19 crc kubenswrapper[4890]: I0121 15:42:19.912649 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"15c7eb35f58f393a9ceb7bc41b4e4e73eaeaf05b996fe213d725df9631b7a811"}
Jan 21 15:44:31 crc kubenswrapper[4890]: I0121 15:44:31.518180 4890 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 15:44:48 crc kubenswrapper[4890]: I0121 15:44:48.762503 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:44:48 crc kubenswrapper[4890]: I0121 15:44:48.763215 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.615708 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rp8lm"]
Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.616912 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovn-controller" containerID="cri-o://016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2" gracePeriod=30
Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.616983 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c" gracePeriod=30
Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.617011 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="nbdb" containerID="cri-o://eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f" gracePeriod=30
Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.617078 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="northd" containerID="cri-o://e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a" gracePeriod=30
Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.617086 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="kube-rbac-proxy-node" containerID="cri-o://a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284" gracePeriod=30
Jan 21 15:44:57 crc kubenswrapper[4890]:
I0121 15:44:57.617367 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="sbdb" containerID="cri-o://460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831" gracePeriod=30 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.617316 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovn-acl-logging" containerID="cri-o://35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64" gracePeriod=30 Jan 21 15:44:57 crc kubenswrapper[4890]: E0121 15:44:57.648010 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 21 15:44:57 crc kubenswrapper[4890]: E0121 15:44:57.648427 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 21 15:44:57 crc kubenswrapper[4890]: E0121 15:44:57.649254 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 21 15:44:57 crc kubenswrapper[4890]: E0121 15:44:57.650585 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 21 15:44:57 crc kubenswrapper[4890]: E0121 15:44:57.651191 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 21 15:44:57 crc kubenswrapper[4890]: E0121 15:44:57.651230 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="nbdb" Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.652901 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" containerID="cri-o://bd3bfcadff93dd0ae59b2f2fe1e4993c6b7ab057555f7a7201932eb3cd4c60cb" gracePeriod=30 Jan 21 15:44:57 crc kubenswrapper[4890]: E0121 15:44:57.656178 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 21 15:44:57 crc kubenswrapper[4890]: E0121 15:44:57.656208 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="sbdb" Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.968697 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/2.log" Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.969660 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/1.log" Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.969690 4890 generic.go:334] "Generic (PLEG): container finished" podID="eba30f20-e5ad-4888-850d-1715115ab8bd" containerID="c5ab0cadc8ae9b2a5654460dcd503ca706de3d4bf65487b20e0f6393e55f00e6" exitCode=2 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.969735 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pflt5" event={"ID":"eba30f20-e5ad-4888-850d-1715115ab8bd","Type":"ContainerDied","Data":"c5ab0cadc8ae9b2a5654460dcd503ca706de3d4bf65487b20e0f6393e55f00e6"} Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.969768 4890 scope.go:117] "RemoveContainer" containerID="68c546b96fb4e62cda5c7fb983e69ba4afe27d603b6921ada1e90ccd565c7c50" Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.970205 4890 scope.go:117] "RemoveContainer" containerID="c5ab0cadc8ae9b2a5654460dcd503ca706de3d4bf65487b20e0f6393e55f00e6" Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.972692 4890 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovnkube-controller/3.log" Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.976853 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovn-acl-logging/0.log" Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977298 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovn-controller/0.log" Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977734 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="bd3bfcadff93dd0ae59b2f2fe1e4993c6b7ab057555f7a7201932eb3cd4c60cb" exitCode=0 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977757 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831" exitCode=0 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977764 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f" exitCode=0 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977772 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a" exitCode=0 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977778 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c" exitCode=0 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977784 4890 generic.go:334] "Generic (PLEG): 
container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284" exitCode=0 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977790 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64" exitCode=143 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977796 4890 generic.go:334] "Generic (PLEG): container finished" podID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerID="016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2" exitCode=143 Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977831 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"bd3bfcadff93dd0ae59b2f2fe1e4993c6b7ab057555f7a7201932eb3cd4c60cb"} Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977873 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831"} Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977892 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f"} Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977905 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a"} Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 
15:44:57.977918 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c"} Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977933 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284"} Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977946 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64"} Jan 21 15:44:57 crc kubenswrapper[4890]: I0121 15:44:57.977960 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2"} Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.006710 4890 scope.go:117] "RemoveContainer" containerID="a6b0d338a0faefe78ab8dd36b0920ea5faeceeaba01091de568a515cb2a1b5c8" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.299820 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovn-acl-logging/0.log" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.300840 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rp8lm_86d5dcae-8e63-4910-9a28-4f6a5b2d427f/ovn-controller/0.log" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.301794 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.355695 4890 scope.go:117] "RemoveContainer" containerID="acb5aa3d317a5aad264ebf97a71d8dbe75e1e0cc1515be83087394cc7f677f5d" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.389053 4890 scope.go:117] "RemoveContainer" containerID="35ba52b1529dd66d9571d98449c5e9e2f72689452be028db01c88efafdafad64" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390184 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vqff5"] Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390534 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="566f28b1-744d-4cd6-b60a-f139a071579d" containerName="registry" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390564 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="566f28b1-744d-4cd6-b60a-f139a071579d" containerName="registry" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390583 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390595 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390616 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390690 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390705 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" 
containerName="kubecfg-setup" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390717 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="kubecfg-setup" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390731 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovn-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390743 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovn-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390761 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovn-acl-logging" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390772 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovn-acl-logging" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390787 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="nbdb" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390798 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="nbdb" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390813 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390824 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390835 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" 
Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390846 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390860 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="kube-rbac-proxy-node" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390871 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="kube-rbac-proxy-node" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.390885 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="sbdb" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.390921 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="sbdb" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.391066 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.391080 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.391092 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="northd" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.391102 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="northd" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392215 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovn-acl-logging" Jan 21 15:44:58 crc 
kubenswrapper[4890]: I0121 15:44:58.392238 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392252 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392267 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="northd" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392281 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="kube-rbac-proxy-node" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392294 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392306 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392316 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="566f28b1-744d-4cd6-b60a-f139a071579d" containerName="registry" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392379 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovn-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392393 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="nbdb" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392410 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" 
containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392428 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="sbdb" Jan 21 15:44:58 crc kubenswrapper[4890]: E0121 15:44:58.392651 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392667 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.392803 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" containerName="ovnkube-controller" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.396790 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.428331 4890 scope.go:117] "RemoveContainer" containerID="bd3bfcadff93dd0ae59b2f2fe1e4993c6b7ab057555f7a7201932eb3cd4c60cb" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.445502 4890 scope.go:117] "RemoveContainer" containerID="460017b4b0a51735350980b76640a49e053725e77a97228a9c67f71f61b05831" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.451642 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-env-overrides\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.451683 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-bin\") pod 
\"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.451736 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-ovn\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.451833 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-node-log\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.451868 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-node-log" (OuterVolumeSpecName: "node-log") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.451897 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.451916 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452075 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452208 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-slash\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452239 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-systemd-units\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452280 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-script-lib\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452303 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7pqd\" (UniqueName: \"kubernetes.io/projected/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-kube-api-access-v7pqd\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452306 4890 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-slash" (OuterVolumeSpecName: "host-slash") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452319 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-var-lib-openvswitch\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452366 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-systemd\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452395 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-ovn-kubernetes\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452422 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-kubelet\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452446 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452530 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-netns\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452560 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-config\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452578 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-log-socket\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452595 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovn-node-metrics-cert\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452617 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-netd\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc 
kubenswrapper[4890]: I0121 15:44:58.452633 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452635 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-openvswitch\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452656 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452665 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-etc-openvswitch\") pod \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\" (UID: \"86d5dcae-8e63-4910-9a28-4f6a5b2d427f\") " Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452751 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f18934ba-30ef-4fca-afd3-7c6f53660378-ovnkube-config\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452787 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452818 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-var-lib-openvswitch\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452841 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-etc-openvswitch\") pod \"ovnkube-node-vqff5\" (UID: 
\"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452867 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-run-openvswitch\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452887 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-cni-bin\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452904 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-run-ovn\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452923 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-run-ovn-kubernetes\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452958 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-slash\") pod 
\"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452977 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-log-socket\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.452996 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-cni-netd\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453016 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-run-netns\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453038 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-systemd-units\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453058 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f18934ba-30ef-4fca-afd3-7c6f53660378-env-overrides\") 
pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453079 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plhvt\" (UniqueName: \"kubernetes.io/projected/f18934ba-30ef-4fca-afd3-7c6f53660378-kube-api-access-plhvt\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453102 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-kubelet\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453137 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f18934ba-30ef-4fca-afd3-7c6f53660378-ovnkube-script-lib\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453166 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f18934ba-30ef-4fca-afd3-7c6f53660378-ovn-node-metrics-cert\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453188 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-node-log\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453207 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-run-systemd\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453263 4890 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453279 4890 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453292 4890 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453303 4890 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453314 4890 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453326 4890 
reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453338 4890 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453379 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453405 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453443 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453471 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453509 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453550 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-log-socket" (OuterVolumeSpecName: "log-socket") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453592 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453624 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453676 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.453927 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.457855 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.458129 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-kube-api-access-v7pqd" (OuterVolumeSpecName: "kube-api-access-v7pqd") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "kube-api-access-v7pqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.460567 4890 scope.go:117] "RemoveContainer" containerID="a74f61b755543eeec0cd3ac3f5130f6ce91dc8127f056c567d34cb7367ca9284" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.465812 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "86d5dcae-8e63-4910-9a28-4f6a5b2d427f" (UID: "86d5dcae-8e63-4910-9a28-4f6a5b2d427f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.474441 4890 scope.go:117] "RemoveContainer" containerID="016f87a9f62d6efa402516e9232212904eaefcca98adeb9e7b111dbabd5b0ae2" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.491598 4890 scope.go:117] "RemoveContainer" containerID="0cccfecff3124ba053fd21b26db1f58d43caee3be8c4542aa842810d2eab2f1c" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.502107 4890 scope.go:117] "RemoveContainer" containerID="e82d35f6568a22fe2c5a3ded2eb5c6a8fed5e016bc3a1530b347f6ef933de15a" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.514259 4890 scope.go:117] "RemoveContainer" containerID="eff8ac21244a9f6494e50f8636266fc55c46d46f359f0c28f0d7d761b561af6f" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553708 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-kubelet\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553750 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f18934ba-30ef-4fca-afd3-7c6f53660378-ovnkube-script-lib\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553773 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f18934ba-30ef-4fca-afd3-7c6f53660378-ovn-node-metrics-cert\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 
15:44:58.553790 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-node-log\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553808 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-run-systemd\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553818 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-kubelet\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553835 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f18934ba-30ef-4fca-afd3-7c6f53660378-ovnkube-config\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553862 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553886 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-var-lib-openvswitch\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553905 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-etc-openvswitch\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553924 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-run-openvswitch\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553941 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-cni-bin\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553957 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-run-ovn\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553952 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-node-log\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553989 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-run-ovn-kubernetes\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.553971 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-run-ovn-kubernetes\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554012 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554041 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-etc-openvswitch\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554040 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-var-lib-openvswitch\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554067 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-run-openvswitch\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554085 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-cni-bin\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554110 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-slash\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554116 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-run-systemd\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554127 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-log-socket\") pod \"ovnkube-node-vqff5\" (UID: 
\"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554147 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-slash\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554155 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-cni-netd\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554169 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-log-socket\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554174 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-run-netns\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554130 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-run-ovn\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: 
I0121 15:44:58.554200 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-cni-netd\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554205 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-systemd-units\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554222 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f18934ba-30ef-4fca-afd3-7c6f53660378-env-overrides\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554238 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-systemd-units\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554240 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plhvt\" (UniqueName: \"kubernetes.io/projected/f18934ba-30ef-4fca-afd3-7c6f53660378-kube-api-access-plhvt\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554337 4890 reconciler_common.go:293] "Volume detached for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554222 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f18934ba-30ef-4fca-afd3-7c6f53660378-host-run-netns\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554423 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7pqd\" (UniqueName: \"kubernetes.io/projected/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-kube-api-access-v7pqd\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554435 4890 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554443 4890 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554452 4890 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554461 4890 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554478 4890 reconciler_common.go:293] "Volume 
detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554486 4890 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554494 4890 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554503 4890 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554505 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f18934ba-30ef-4fca-afd3-7c6f53660378-ovnkube-config\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554512 4890 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554557 4890 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554597 
4890 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86d5dcae-8e63-4910-9a28-4f6a5b2d427f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554602 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f18934ba-30ef-4fca-afd3-7c6f53660378-env-overrides\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.554646 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f18934ba-30ef-4fca-afd3-7c6f53660378-ovnkube-script-lib\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.558120 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f18934ba-30ef-4fca-afd3-7c6f53660378-ovn-node-metrics-cert\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.573285 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plhvt\" (UniqueName: \"kubernetes.io/projected/f18934ba-30ef-4fca-afd3-7c6f53660378-kube-api-access-plhvt\") pod \"ovnkube-node-vqff5\" (UID: \"f18934ba-30ef-4fca-afd3-7c6f53660378\") " pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.726517 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:44:58 crc kubenswrapper[4890]: W0121 15:44:58.746869 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf18934ba_30ef_4fca_afd3_7c6f53660378.slice/crio-80c0968b8ba8a3721d4bc3787107cbaf22538dfd73ed8aca57bba8bbc3c2dbe8 WatchSource:0}: Error finding container 80c0968b8ba8a3721d4bc3787107cbaf22538dfd73ed8aca57bba8bbc3c2dbe8: Status 404 returned error can't find the container with id 80c0968b8ba8a3721d4bc3787107cbaf22538dfd73ed8aca57bba8bbc3c2dbe8 Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.994036 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" event={"ID":"86d5dcae-8e63-4910-9a28-4f6a5b2d427f","Type":"ContainerDied","Data":"9ab9e65ad4d916ba05cd66018ce0fddbce8fa12078227840f8ccdc6ea8a3ede8"} Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.997890 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/2.log" Jan 21 15:44:58 crc kubenswrapper[4890]: I0121 15:44:58.997984 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pflt5" event={"ID":"eba30f20-e5ad-4888-850d-1715115ab8bd","Type":"ContainerStarted","Data":"8e8adcfb73927826855410afa862d458a39c60050a82ff49b52e9e32d7f55588"} Jan 21 15:44:59 crc kubenswrapper[4890]: I0121 15:44:59.000217 4890 generic.go:334] "Generic (PLEG): container finished" podID="f18934ba-30ef-4fca-afd3-7c6f53660378" containerID="8369994757a805ab4f77c6630bbb44ffba21a7052fb9b32bcb2be79d15084037" exitCode=0 Jan 21 15:44:59 crc kubenswrapper[4890]: I0121 15:44:59.000391 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rp8lm" Jan 21 15:44:59 crc kubenswrapper[4890]: I0121 15:44:59.001553 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerDied","Data":"8369994757a805ab4f77c6630bbb44ffba21a7052fb9b32bcb2be79d15084037"} Jan 21 15:44:59 crc kubenswrapper[4890]: I0121 15:44:59.001618 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerStarted","Data":"80c0968b8ba8a3721d4bc3787107cbaf22538dfd73ed8aca57bba8bbc3c2dbe8"} Jan 21 15:44:59 crc kubenswrapper[4890]: I0121 15:44:59.093517 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rp8lm"] Jan 21 15:44:59 crc kubenswrapper[4890]: I0121 15:44:59.099187 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rp8lm"] Jan 21 15:44:59 crc kubenswrapper[4890]: I0121 15:44:59.921963 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d5dcae-8e63-4910-9a28-4f6a5b2d427f" path="/var/lib/kubelet/pods/86d5dcae-8e63-4910-9a28-4f6a5b2d427f/volumes" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.010616 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerStarted","Data":"f82152ddde3dc841471f08769c8576cba4baf3d426fb1249fc18cd7cb18791c8"} Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.010987 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerStarted","Data":"0565df0e9c386d4e19c3f4ff0c16f9f9f112ea484a9a4d97b566a8abec0c907a"} Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 
15:45:00.011002 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerStarted","Data":"2fe8f879f73038be6edea7397d16062cc302a3d86833ca06d460fb785f28485a"} Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.011014 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerStarted","Data":"ea2b122bfe87a05f8815da4c5e06f7d88e937a1889df0a9440f0341c9a5c1616"} Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.011025 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerStarted","Data":"16f5834963f9da8263eabbef435033e3298ac212fe8fa0df417f649fec20b8b5"} Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.157401 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd"] Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.158284 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.160053 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.160532 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.175646 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwcx\" (UniqueName: \"kubernetes.io/projected/25fa99d1-fd28-4795-ae17-06728e1cf697-kube-api-access-xvwcx\") pod \"collect-profiles-29483505-wfgrd\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.175749 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25fa99d1-fd28-4795-ae17-06728e1cf697-config-volume\") pod \"collect-profiles-29483505-wfgrd\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.175775 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25fa99d1-fd28-4795-ae17-06728e1cf697-secret-volume\") pod \"collect-profiles-29483505-wfgrd\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.277281 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/25fa99d1-fd28-4795-ae17-06728e1cf697-config-volume\") pod \"collect-profiles-29483505-wfgrd\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.277413 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25fa99d1-fd28-4795-ae17-06728e1cf697-secret-volume\") pod \"collect-profiles-29483505-wfgrd\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.277479 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwcx\" (UniqueName: \"kubernetes.io/projected/25fa99d1-fd28-4795-ae17-06728e1cf697-kube-api-access-xvwcx\") pod \"collect-profiles-29483505-wfgrd\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.278129 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25fa99d1-fd28-4795-ae17-06728e1cf697-config-volume\") pod \"collect-profiles-29483505-wfgrd\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.283715 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25fa99d1-fd28-4795-ae17-06728e1cf697-secret-volume\") pod \"collect-profiles-29483505-wfgrd\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.301089 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwcx\" (UniqueName: \"kubernetes.io/projected/25fa99d1-fd28-4795-ae17-06728e1cf697-kube-api-access-xvwcx\") pod \"collect-profiles-29483505-wfgrd\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: I0121 15:45:00.473123 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: E0121 15:45:00.500739 4890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager_25fa99d1-fd28-4795-ae17-06728e1cf697_0(7e62392e94387da8671127fa7165a1fa834c5356d271d19847154f82decdd8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 15:45:00 crc kubenswrapper[4890]: E0121 15:45:00.500881 4890 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager_25fa99d1-fd28-4795-ae17-06728e1cf697_0(7e62392e94387da8671127fa7165a1fa834c5356d271d19847154f82decdd8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: E0121 15:45:00.500933 4890 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager_25fa99d1-fd28-4795-ae17-06728e1cf697_0(7e62392e94387da8671127fa7165a1fa834c5356d271d19847154f82decdd8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:00 crc kubenswrapper[4890]: E0121 15:45:00.501039 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager(25fa99d1-fd28-4795-ae17-06728e1cf697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager(25fa99d1-fd28-4795-ae17-06728e1cf697)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager_25fa99d1-fd28-4795-ae17-06728e1cf697_0(7e62392e94387da8671127fa7165a1fa834c5356d271d19847154f82decdd8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" podUID="25fa99d1-fd28-4795-ae17-06728e1cf697" Jan 21 15:45:01 crc kubenswrapper[4890]: I0121 15:45:01.021162 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerStarted","Data":"ea7081154d0d2d421d77bb448c2eb36da93c2e4f531deb89ffeab203c2ec4cf2"} Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.035687 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerStarted","Data":"f5a612b7967b4bcccb2f17c959bdd1a25948a009bc25a0c4c78f731bc8165aa2"} Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.765724 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-wqd2s"] Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.766770 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.769980 4890 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-l5sgg" Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.771449 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.771505 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.774038 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.929205 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-crc-storage\") pod \"crc-storage-crc-wqd2s\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.929275 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmq6h\" (UniqueName: \"kubernetes.io/projected/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-kube-api-access-dmq6h\") pod \"crc-storage-crc-wqd2s\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:03 crc kubenswrapper[4890]: I0121 15:45:03.929345 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-node-mnt\") pod \"crc-storage-crc-wqd2s\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: I0121 15:45:04.030746 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-crc-storage\") pod \"crc-storage-crc-wqd2s\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: I0121 15:45:04.030811 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmq6h\" (UniqueName: \"kubernetes.io/projected/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-kube-api-access-dmq6h\") pod \"crc-storage-crc-wqd2s\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: I0121 15:45:04.030850 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-node-mnt\") pod \"crc-storage-crc-wqd2s\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: I0121 15:45:04.031224 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-node-mnt\") pod \"crc-storage-crc-wqd2s\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: I0121 15:45:04.032587 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-crc-storage\") pod \"crc-storage-crc-wqd2s\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: I0121 15:45:04.064840 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmq6h\" (UniqueName: 
\"kubernetes.io/projected/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-kube-api-access-dmq6h\") pod \"crc-storage-crc-wqd2s\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: I0121 15:45:04.084616 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: E0121 15:45:04.119220 4890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wqd2s_crc-storage_9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb_0(3cc0f0189b771c262db9112b193bd749cc286610d2da208596e12d157823ded6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 15:45:04 crc kubenswrapper[4890]: E0121 15:45:04.119550 4890 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wqd2s_crc-storage_9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb_0(3cc0f0189b771c262db9112b193bd749cc286610d2da208596e12d157823ded6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: E0121 15:45:04.119823 4890 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wqd2s_crc-storage_9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb_0(3cc0f0189b771c262db9112b193bd749cc286610d2da208596e12d157823ded6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:04 crc kubenswrapper[4890]: E0121 15:45:04.120041 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-wqd2s_crc-storage(9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-wqd2s_crc-storage(9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wqd2s_crc-storage_9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb_0(3cc0f0189b771c262db9112b193bd749cc286610d2da208596e12d157823ded6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-wqd2s" podUID="9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.011082 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-wqd2s"] Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.011644 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.012201 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.014954 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd"] Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.015088 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.015491 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:07 crc kubenswrapper[4890]: E0121 15:45:07.051447 4890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wqd2s_crc-storage_9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb_0(76bd8ebca679b7033b0b3883ec1fc72465cfa25980daa034e2bf2ac665bfe388): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 15:45:07 crc kubenswrapper[4890]: E0121 15:45:07.051570 4890 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wqd2s_crc-storage_9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb_0(76bd8ebca679b7033b0b3883ec1fc72465cfa25980daa034e2bf2ac665bfe388): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:07 crc kubenswrapper[4890]: E0121 15:45:07.051606 4890 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wqd2s_crc-storage_9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb_0(76bd8ebca679b7033b0b3883ec1fc72465cfa25980daa034e2bf2ac665bfe388): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:07 crc kubenswrapper[4890]: E0121 15:45:07.051689 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-wqd2s_crc-storage(9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-wqd2s_crc-storage(9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wqd2s_crc-storage_9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb_0(76bd8ebca679b7033b0b3883ec1fc72465cfa25980daa034e2bf2ac665bfe388): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-wqd2s" podUID="9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" Jan 21 15:45:07 crc kubenswrapper[4890]: E0121 15:45:07.056709 4890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager_25fa99d1-fd28-4795-ae17-06728e1cf697_0(fad068bfd3115853aeb489792a11e9253bb3b52a557d6618d566f42ed01015c6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 15:45:07 crc kubenswrapper[4890]: E0121 15:45:07.056772 4890 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager_25fa99d1-fd28-4795-ae17-06728e1cf697_0(fad068bfd3115853aeb489792a11e9253bb3b52a557d6618d566f42ed01015c6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:07 crc kubenswrapper[4890]: E0121 15:45:07.056792 4890 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager_25fa99d1-fd28-4795-ae17-06728e1cf697_0(fad068bfd3115853aeb489792a11e9253bb3b52a557d6618d566f42ed01015c6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:07 crc kubenswrapper[4890]: E0121 15:45:07.056837 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager(25fa99d1-fd28-4795-ae17-06728e1cf697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager(25fa99d1-fd28-4795-ae17-06728e1cf697)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483505-wfgrd_openshift-operator-lifecycle-manager_25fa99d1-fd28-4795-ae17-06728e1cf697_0(fad068bfd3115853aeb489792a11e9253bb3b52a557d6618d566f42ed01015c6): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" podUID="25fa99d1-fd28-4795-ae17-06728e1cf697" Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.066465 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" event={"ID":"f18934ba-30ef-4fca-afd3-7c6f53660378","Type":"ContainerStarted","Data":"8002d4184c9aae81000bd2cc4adbf7fa08b0713d60ec33cff099b0428a2d239a"} Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.066808 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.066850 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.103601 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" podStartSLOduration=9.103585407 podStartE2EDuration="9.103585407s" podCreationTimestamp="2026-01-21 15:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:45:07.096798481 +0000 UTC m=+789.458240910" watchObservedRunningTime="2026-01-21 15:45:07.103585407 +0000 UTC m=+789.465027816" Jan 21 15:45:07 crc kubenswrapper[4890]: I0121 15:45:07.104438 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:45:08 crc kubenswrapper[4890]: I0121 15:45:08.071983 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:45:08 crc kubenswrapper[4890]: I0121 15:45:08.098625 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:45:17 crc 
kubenswrapper[4890]: I0121 15:45:17.913750 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:17 crc kubenswrapper[4890]: I0121 15:45:17.916946 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:18 crc kubenswrapper[4890]: I0121 15:45:18.120620 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-wqd2s"] Jan 21 15:45:18 crc kubenswrapper[4890]: I0121 15:45:18.129822 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:45:18 crc kubenswrapper[4890]: I0121 15:45:18.141310 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-wqd2s" event={"ID":"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb","Type":"ContainerStarted","Data":"f58d7b4fe758f02d49608e0200afb240894e1e4088455d4d8152962a43130725"} Jan 21 15:45:18 crc kubenswrapper[4890]: I0121 15:45:18.762198 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:45:18 crc kubenswrapper[4890]: I0121 15:45:18.762255 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:45:20 crc kubenswrapper[4890]: I0121 15:45:20.153596 4890 generic.go:334] "Generic (PLEG): container finished" podID="9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" containerID="75735260101bba2496b0774c759dda80b93f8a9005252b3a5e675edf3c31947d" exitCode=0 Jan 21 15:45:20 crc 
kubenswrapper[4890]: I0121 15:45:20.153663 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-wqd2s" event={"ID":"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb","Type":"ContainerDied","Data":"75735260101bba2496b0774c759dda80b93f8a9005252b3a5e675edf3c31947d"} Jan 21 15:45:20 crc kubenswrapper[4890]: I0121 15:45:20.913515 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:20 crc kubenswrapper[4890]: I0121 15:45:20.913953 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.100499 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd"] Jan 21 15:45:21 crc kubenswrapper[4890]: W0121 15:45:21.105589 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25fa99d1_fd28_4795_ae17_06728e1cf697.slice/crio-782351eeeba94248c4ec94282ee8547aa291af5aab2371a4cca4197355f52470 WatchSource:0}: Error finding container 782351eeeba94248c4ec94282ee8547aa291af5aab2371a4cca4197355f52470: Status 404 returned error can't find the container with id 782351eeeba94248c4ec94282ee8547aa291af5aab2371a4cca4197355f52470 Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.169002 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" event={"ID":"25fa99d1-fd28-4795-ae17-06728e1cf697","Type":"ContainerStarted","Data":"782351eeeba94248c4ec94282ee8547aa291af5aab2371a4cca4197355f52470"} Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.372995 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.554837 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-node-mnt\") pod \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.554999 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-crc-storage\") pod \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.555000 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" (UID: "9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.555120 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmq6h\" (UniqueName: \"kubernetes.io/projected/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-kube-api-access-dmq6h\") pod \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\" (UID: \"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb\") " Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.555465 4890 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.562159 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-kube-api-access-dmq6h" (OuterVolumeSpecName: "kube-api-access-dmq6h") pod "9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" (UID: "9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb"). InnerVolumeSpecName "kube-api-access-dmq6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.567934 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" (UID: "9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.657011 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmq6h\" (UniqueName: \"kubernetes.io/projected/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-kube-api-access-dmq6h\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:21 crc kubenswrapper[4890]: I0121 15:45:21.657056 4890 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:22 crc kubenswrapper[4890]: I0121 15:45:22.182520 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wqd2s" Jan 21 15:45:22 crc kubenswrapper[4890]: I0121 15:45:22.182500 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-wqd2s" event={"ID":"9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb","Type":"ContainerDied","Data":"f58d7b4fe758f02d49608e0200afb240894e1e4088455d4d8152962a43130725"} Jan 21 15:45:22 crc kubenswrapper[4890]: I0121 15:45:22.182742 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f58d7b4fe758f02d49608e0200afb240894e1e4088455d4d8152962a43130725" Jan 21 15:45:22 crc kubenswrapper[4890]: I0121 15:45:22.185676 4890 generic.go:334] "Generic (PLEG): container finished" podID="25fa99d1-fd28-4795-ae17-06728e1cf697" containerID="80253fc5641dbee354b7744e857ad42a0dc7b0bb8eec90ec632c1ec3a250170b" exitCode=0 Jan 21 15:45:22 crc kubenswrapper[4890]: I0121 15:45:22.185734 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" event={"ID":"25fa99d1-fd28-4795-ae17-06728e1cf697","Type":"ContainerDied","Data":"80253fc5641dbee354b7744e857ad42a0dc7b0bb8eec90ec632c1ec3a250170b"} Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.381629 4890 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.485147 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25fa99d1-fd28-4795-ae17-06728e1cf697-config-volume\") pod \"25fa99d1-fd28-4795-ae17-06728e1cf697\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.485676 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvwcx\" (UniqueName: \"kubernetes.io/projected/25fa99d1-fd28-4795-ae17-06728e1cf697-kube-api-access-xvwcx\") pod \"25fa99d1-fd28-4795-ae17-06728e1cf697\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.485933 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25fa99d1-fd28-4795-ae17-06728e1cf697-secret-volume\") pod \"25fa99d1-fd28-4795-ae17-06728e1cf697\" (UID: \"25fa99d1-fd28-4795-ae17-06728e1cf697\") " Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.486089 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fa99d1-fd28-4795-ae17-06728e1cf697-config-volume" (OuterVolumeSpecName: "config-volume") pod "25fa99d1-fd28-4795-ae17-06728e1cf697" (UID: "25fa99d1-fd28-4795-ae17-06728e1cf697"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.486457 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25fa99d1-fd28-4795-ae17-06728e1cf697-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.490447 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fa99d1-fd28-4795-ae17-06728e1cf697-kube-api-access-xvwcx" (OuterVolumeSpecName: "kube-api-access-xvwcx") pod "25fa99d1-fd28-4795-ae17-06728e1cf697" (UID: "25fa99d1-fd28-4795-ae17-06728e1cf697"). InnerVolumeSpecName "kube-api-access-xvwcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.491512 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fa99d1-fd28-4795-ae17-06728e1cf697-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "25fa99d1-fd28-4795-ae17-06728e1cf697" (UID: "25fa99d1-fd28-4795-ae17-06728e1cf697"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.587730 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvwcx\" (UniqueName: \"kubernetes.io/projected/25fa99d1-fd28-4795-ae17-06728e1cf697-kube-api-access-xvwcx\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:23 crc kubenswrapper[4890]: I0121 15:45:23.587775 4890 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25fa99d1-fd28-4795-ae17-06728e1cf697-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:24 crc kubenswrapper[4890]: I0121 15:45:24.198838 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" event={"ID":"25fa99d1-fd28-4795-ae17-06728e1cf697","Type":"ContainerDied","Data":"782351eeeba94248c4ec94282ee8547aa291af5aab2371a4cca4197355f52470"} Jan 21 15:45:24 crc kubenswrapper[4890]: I0121 15:45:24.198883 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782351eeeba94248c4ec94282ee8547aa291af5aab2371a4cca4197355f52470" Jan 21 15:45:24 crc kubenswrapper[4890]: I0121 15:45:24.198887 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.759403 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vqff5" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.914870 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2"] Jan 21 15:45:28 crc kubenswrapper[4890]: E0121 15:45:28.915326 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" containerName="storage" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.915423 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" containerName="storage" Jan 21 15:45:28 crc kubenswrapper[4890]: E0121 15:45:28.915484 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fa99d1-fd28-4795-ae17-06728e1cf697" containerName="collect-profiles" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.915531 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fa99d1-fd28-4795-ae17-06728e1cf697" containerName="collect-profiles" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.915704 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fa99d1-fd28-4795-ae17-06728e1cf697" containerName="collect-profiles" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.915786 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" containerName="storage" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.916704 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.918692 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.930652 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2"] Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.978092 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgzdc\" (UniqueName: \"kubernetes.io/projected/28f42f06-2b26-4f38-9fb3-653acad943d2-kube-api-access-pgzdc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.978158 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:28 crc kubenswrapper[4890]: I0121 15:45:28.978222 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:29 crc kubenswrapper[4890]: 
I0121 15:45:29.078959 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:29 crc kubenswrapper[4890]: I0121 15:45:29.079026 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgzdc\" (UniqueName: \"kubernetes.io/projected/28f42f06-2b26-4f38-9fb3-653acad943d2-kube-api-access-pgzdc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:29 crc kubenswrapper[4890]: I0121 15:45:29.079076 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:29 crc kubenswrapper[4890]: I0121 15:45:29.079563 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:29 crc kubenswrapper[4890]: I0121 15:45:29.079594 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:29 crc kubenswrapper[4890]: I0121 15:45:29.104815 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgzdc\" (UniqueName: \"kubernetes.io/projected/28f42f06-2b26-4f38-9fb3-653acad943d2-kube-api-access-pgzdc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:29 crc kubenswrapper[4890]: I0121 15:45:29.230812 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:29 crc kubenswrapper[4890]: I0121 15:45:29.433700 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2"] Jan 21 15:45:30 crc kubenswrapper[4890]: I0121 15:45:30.235432 4890 generic.go:334] "Generic (PLEG): container finished" podID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerID="52cda9d93e3b3deac20388b830d827c639dcecc29abeb666ec0508441d4ceda7" exitCode=0 Jan 21 15:45:30 crc kubenswrapper[4890]: I0121 15:45:30.235481 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" event={"ID":"28f42f06-2b26-4f38-9fb3-653acad943d2","Type":"ContainerDied","Data":"52cda9d93e3b3deac20388b830d827c639dcecc29abeb666ec0508441d4ceda7"} Jan 21 15:45:30 crc kubenswrapper[4890]: I0121 15:45:30.235509 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" event={"ID":"28f42f06-2b26-4f38-9fb3-653acad943d2","Type":"ContainerStarted","Data":"28b1add89d7e42490c6bf7d7a3cc8643e9dcca38f48c29c27543cad9896708dc"} Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.203503 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sxbw5"] Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.205052 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.214011 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sxbw5"] Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.407475 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-utilities\") pod \"redhat-operators-sxbw5\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") " pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.407513 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-catalog-content\") pod \"redhat-operators-sxbw5\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") " pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.407559 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grtb9\" (UniqueName: \"kubernetes.io/projected/0119d510-0122-4f7d-afde-efee6a7e7753-kube-api-access-grtb9\") pod \"redhat-operators-sxbw5\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") " 
pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.509027 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grtb9\" (UniqueName: \"kubernetes.io/projected/0119d510-0122-4f7d-afde-efee6a7e7753-kube-api-access-grtb9\") pod \"redhat-operators-sxbw5\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") " pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.509153 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-utilities\") pod \"redhat-operators-sxbw5\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") " pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.509182 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-catalog-content\") pod \"redhat-operators-sxbw5\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") " pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.509748 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-utilities\") pod \"redhat-operators-sxbw5\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") " pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.509803 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-catalog-content\") pod \"redhat-operators-sxbw5\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") " pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc 
kubenswrapper[4890]: I0121 15:45:31.529335 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grtb9\" (UniqueName: \"kubernetes.io/projected/0119d510-0122-4f7d-afde-efee6a7e7753-kube-api-access-grtb9\") pod \"redhat-operators-sxbw5\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") " pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:31 crc kubenswrapper[4890]: I0121 15:45:31.824406 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbw5" Jan 21 15:45:32 crc kubenswrapper[4890]: I0121 15:45:32.231533 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sxbw5"] Jan 21 15:45:32 crc kubenswrapper[4890]: I0121 15:45:32.248765 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbw5" event={"ID":"0119d510-0122-4f7d-afde-efee6a7e7753","Type":"ContainerStarted","Data":"4825dd3a5430f5c7984dc665b7d9a49b9ff399d5bd5a621fab6bb66a1d35c363"} Jan 21 15:45:32 crc kubenswrapper[4890]: I0121 15:45:32.250372 4890 generic.go:334] "Generic (PLEG): container finished" podID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerID="f4a6f0b374f413bf7b3f447512d2a9ab9558d3f205ee1055f483dec7b1ffdd47" exitCode=0 Jan 21 15:45:32 crc kubenswrapper[4890]: I0121 15:45:32.250407 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" event={"ID":"28f42f06-2b26-4f38-9fb3-653acad943d2","Type":"ContainerDied","Data":"f4a6f0b374f413bf7b3f447512d2a9ab9558d3f205ee1055f483dec7b1ffdd47"} Jan 21 15:45:33 crc kubenswrapper[4890]: I0121 15:45:33.257950 4890 generic.go:334] "Generic (PLEG): container finished" podID="0119d510-0122-4f7d-afde-efee6a7e7753" containerID="3c7d0df7425dd4eea415f86d96e08361115411ce5f22228e131ef44971aecb1d" exitCode=0 Jan 21 15:45:33 crc kubenswrapper[4890]: I0121 15:45:33.258044 4890 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbw5" event={"ID":"0119d510-0122-4f7d-afde-efee6a7e7753","Type":"ContainerDied","Data":"3c7d0df7425dd4eea415f86d96e08361115411ce5f22228e131ef44971aecb1d"} Jan 21 15:45:33 crc kubenswrapper[4890]: I0121 15:45:33.263372 4890 generic.go:334] "Generic (PLEG): container finished" podID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerID="200e28a4d573a3bb04e39e418047502ddc387ccd5dfa25d59fc1627b49a93d49" exitCode=0 Jan 21 15:45:33 crc kubenswrapper[4890]: I0121 15:45:33.263407 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" event={"ID":"28f42f06-2b26-4f38-9fb3-653acad943d2","Type":"ContainerDied","Data":"200e28a4d573a3bb04e39e418047502ddc387ccd5dfa25d59fc1627b49a93d49"} Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.501004 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.645269 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgzdc\" (UniqueName: \"kubernetes.io/projected/28f42f06-2b26-4f38-9fb3-653acad943d2-kube-api-access-pgzdc\") pod \"28f42f06-2b26-4f38-9fb3-653acad943d2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.645431 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-bundle\") pod \"28f42f06-2b26-4f38-9fb3-653acad943d2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.645490 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-util\") pod \"28f42f06-2b26-4f38-9fb3-653acad943d2\" (UID: \"28f42f06-2b26-4f38-9fb3-653acad943d2\") " Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.646319 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-bundle" (OuterVolumeSpecName: "bundle") pod "28f42f06-2b26-4f38-9fb3-653acad943d2" (UID: "28f42f06-2b26-4f38-9fb3-653acad943d2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.652611 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f42f06-2b26-4f38-9fb3-653acad943d2-kube-api-access-pgzdc" (OuterVolumeSpecName: "kube-api-access-pgzdc") pod "28f42f06-2b26-4f38-9fb3-653acad943d2" (UID: "28f42f06-2b26-4f38-9fb3-653acad943d2"). InnerVolumeSpecName "kube-api-access-pgzdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.670611 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-util" (OuterVolumeSpecName: "util") pod "28f42f06-2b26-4f38-9fb3-653acad943d2" (UID: "28f42f06-2b26-4f38-9fb3-653acad943d2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.746620 4890 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.746654 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgzdc\" (UniqueName: \"kubernetes.io/projected/28f42f06-2b26-4f38-9fb3-653acad943d2-kube-api-access-pgzdc\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:34 crc kubenswrapper[4890]: I0121 15:45:34.746664 4890 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/28f42f06-2b26-4f38-9fb3-653acad943d2-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:35 crc kubenswrapper[4890]: I0121 15:45:35.278134 4890 generic.go:334] "Generic (PLEG): container finished" podID="0119d510-0122-4f7d-afde-efee6a7e7753" containerID="d8c0bdd3784377b43b4a62c07d1cf097d19118d9890782d4d53db414d949abdc" exitCode=0 Jan 21 15:45:35 crc kubenswrapper[4890]: I0121 15:45:35.278268 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbw5" event={"ID":"0119d510-0122-4f7d-afde-efee6a7e7753","Type":"ContainerDied","Data":"d8c0bdd3784377b43b4a62c07d1cf097d19118d9890782d4d53db414d949abdc"} Jan 21 15:45:35 crc kubenswrapper[4890]: I0121 15:45:35.282076 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2" event={"ID":"28f42f06-2b26-4f38-9fb3-653acad943d2","Type":"ContainerDied","Data":"28b1add89d7e42490c6bf7d7a3cc8643e9dcca38f48c29c27543cad9896708dc"} Jan 21 15:45:35 crc kubenswrapper[4890]: I0121 15:45:35.282141 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28b1add89d7e42490c6bf7d7a3cc8643e9dcca38f48c29c27543cad9896708dc" Jan 21 
15:45:35 crc kubenswrapper[4890]: I0121 15:45:35.282203 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2"
Jan 21 15:45:36 crc kubenswrapper[4890]: I0121 15:45:36.295568 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbw5" event={"ID":"0119d510-0122-4f7d-afde-efee6a7e7753","Type":"ContainerStarted","Data":"532e9f63cc8ebfc3853fe1ac2c985728be12b0b8a98cee28006c5a57ca81999a"}
Jan 21 15:45:36 crc kubenswrapper[4890]: I0121 15:45:36.329707 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sxbw5" podStartSLOduration=2.886185118 podStartE2EDuration="5.32968813s" podCreationTimestamp="2026-01-21 15:45:31 +0000 UTC" firstStartedPulling="2026-01-21 15:45:33.25994727 +0000 UTC m=+815.621389679" lastFinishedPulling="2026-01-21 15:45:35.703450282 +0000 UTC m=+818.064892691" observedRunningTime="2026-01-21 15:45:36.328621523 +0000 UTC m=+818.690063932" watchObservedRunningTime="2026-01-21 15:45:36.32968813 +0000 UTC m=+818.691130539"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.761300 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-n8zvx"]
Jan 21 15:45:39 crc kubenswrapper[4890]: E0121 15:45:39.763103 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerName="util"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.763197 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerName="util"
Jan 21 15:45:39 crc kubenswrapper[4890]: E0121 15:45:39.763267 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerName="extract"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.763327 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerName="extract"
Jan 21 15:45:39 crc kubenswrapper[4890]: E0121 15:45:39.763417 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerName="pull"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.763483 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerName="pull"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.763686 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f42f06-2b26-4f38-9fb3-653acad943d2" containerName="extract"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.764283 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-n8zvx"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.766482 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-pnxft"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.766564 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.768626 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.779845 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-n8zvx"]
Jan 21 15:45:39 crc kubenswrapper[4890]: I0121 15:45:39.926395 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2nxf\" (UniqueName: \"kubernetes.io/projected/829d883b-bd0f-40fb-bf6e-be3defd44399-kube-api-access-k2nxf\") pod \"nmstate-operator-646758c888-n8zvx\" (UID: \"829d883b-bd0f-40fb-bf6e-be3defd44399\") " pod="openshift-nmstate/nmstate-operator-646758c888-n8zvx"
Jan 21 15:45:40 crc kubenswrapper[4890]: I0121 15:45:40.027959 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2nxf\" (UniqueName: \"kubernetes.io/projected/829d883b-bd0f-40fb-bf6e-be3defd44399-kube-api-access-k2nxf\") pod \"nmstate-operator-646758c888-n8zvx\" (UID: \"829d883b-bd0f-40fb-bf6e-be3defd44399\") " pod="openshift-nmstate/nmstate-operator-646758c888-n8zvx"
Jan 21 15:45:40 crc kubenswrapper[4890]: I0121 15:45:40.046779 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2nxf\" (UniqueName: \"kubernetes.io/projected/829d883b-bd0f-40fb-bf6e-be3defd44399-kube-api-access-k2nxf\") pod \"nmstate-operator-646758c888-n8zvx\" (UID: \"829d883b-bd0f-40fb-bf6e-be3defd44399\") " pod="openshift-nmstate/nmstate-operator-646758c888-n8zvx"
Jan 21 15:45:40 crc kubenswrapper[4890]: I0121 15:45:40.126984 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-n8zvx"
Jan 21 15:45:40 crc kubenswrapper[4890]: I0121 15:45:40.535085 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-n8zvx"]
Jan 21 15:45:41 crc kubenswrapper[4890]: I0121 15:45:41.324587 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-n8zvx" event={"ID":"829d883b-bd0f-40fb-bf6e-be3defd44399","Type":"ContainerStarted","Data":"25a28397182b33c0eb4e3c5d0e3388525574c2c784a85beb72615a24aaa3eafb"}
Jan 21 15:45:41 crc kubenswrapper[4890]: I0121 15:45:41.825093 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sxbw5"
Jan 21 15:45:41 crc kubenswrapper[4890]: I0121 15:45:41.825213 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sxbw5"
Jan 21 15:45:41 crc kubenswrapper[4890]: I0121 15:45:41.877655 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sxbw5"
Jan 21 15:45:42 crc kubenswrapper[4890]: I0121 15:45:42.385672 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sxbw5"
Jan 21 15:45:43 crc kubenswrapper[4890]: I0121 15:45:43.336472 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-n8zvx" event={"ID":"829d883b-bd0f-40fb-bf6e-be3defd44399","Type":"ContainerStarted","Data":"c13878da877b6a79742431dc56027a7ee08c506b598f2b19b2d6746f65779e72"}
Jan 21 15:45:43 crc kubenswrapper[4890]: I0121 15:45:43.354444 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-n8zvx" podStartSLOduration=2.3333237540000002 podStartE2EDuration="4.354426073s" podCreationTimestamp="2026-01-21 15:45:39 +0000 UTC" firstStartedPulling="2026-01-21 15:45:40.545927206 +0000 UTC m=+822.907369615" lastFinishedPulling="2026-01-21 15:45:42.567029525 +0000 UTC m=+824.928471934" observedRunningTime="2026-01-21 15:45:43.353923181 +0000 UTC m=+825.715365600" watchObservedRunningTime="2026-01-21 15:45:43.354426073 +0000 UTC m=+825.715868482"
Jan 21 15:45:44 crc kubenswrapper[4890]: I0121 15:45:44.386733 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sxbw5"]
Jan 21 15:45:45 crc kubenswrapper[4890]: I0121 15:45:45.348681 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sxbw5" podUID="0119d510-0122-4f7d-afde-efee6a7e7753" containerName="registry-server" containerID="cri-o://532e9f63cc8ebfc3853fe1ac2c985728be12b0b8a98cee28006c5a57ca81999a" gracePeriod=2
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.370156 4890 generic.go:334] "Generic (PLEG): container finished" podID="0119d510-0122-4f7d-afde-efee6a7e7753" containerID="532e9f63cc8ebfc3853fe1ac2c985728be12b0b8a98cee28006c5a57ca81999a" exitCode=0
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.370226 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbw5" event={"ID":"0119d510-0122-4f7d-afde-efee6a7e7753","Type":"ContainerDied","Data":"532e9f63cc8ebfc3853fe1ac2c985728be12b0b8a98cee28006c5a57ca81999a"}
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.762715 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.763282 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.763370 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh"
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.764283 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15c7eb35f58f393a9ceb7bc41b4e4e73eaeaf05b996fe213d725df9631b7a811"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.764392 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://15c7eb35f58f393a9ceb7bc41b4e4e73eaeaf05b996fe213d725df9631b7a811" gracePeriod=600
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.786316 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbw5"
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.869975 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-utilities\") pod \"0119d510-0122-4f7d-afde-efee6a7e7753\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") "
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.870029 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-catalog-content\") pod \"0119d510-0122-4f7d-afde-efee6a7e7753\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") "
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.870056 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grtb9\" (UniqueName: \"kubernetes.io/projected/0119d510-0122-4f7d-afde-efee6a7e7753-kube-api-access-grtb9\") pod \"0119d510-0122-4f7d-afde-efee6a7e7753\" (UID: \"0119d510-0122-4f7d-afde-efee6a7e7753\") "
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.870834 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-utilities" (OuterVolumeSpecName: "utilities") pod "0119d510-0122-4f7d-afde-efee6a7e7753" (UID: "0119d510-0122-4f7d-afde-efee6a7e7753"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.875581 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0119d510-0122-4f7d-afde-efee6a7e7753-kube-api-access-grtb9" (OuterVolumeSpecName: "kube-api-access-grtb9") pod "0119d510-0122-4f7d-afde-efee6a7e7753" (UID: "0119d510-0122-4f7d-afde-efee6a7e7753"). InnerVolumeSpecName "kube-api-access-grtb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.971935 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.971975 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grtb9\" (UniqueName: \"kubernetes.io/projected/0119d510-0122-4f7d-afde-efee6a7e7753-kube-api-access-grtb9\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:48 crc kubenswrapper[4890]: I0121 15:45:48.996818 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0119d510-0122-4f7d-afde-efee6a7e7753" (UID: "0119d510-0122-4f7d-afde-efee6a7e7753"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.072879 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119d510-0122-4f7d-afde-efee6a7e7753-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.119334 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-fxsfm"]
Jan 21 15:45:49 crc kubenswrapper[4890]: E0121 15:45:49.119564 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0119d510-0122-4f7d-afde-efee6a7e7753" containerName="registry-server"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.119577 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0119d510-0122-4f7d-afde-efee6a7e7753" containerName="registry-server"
Jan 21 15:45:49 crc kubenswrapper[4890]: E0121 15:45:49.119592 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0119d510-0122-4f7d-afde-efee6a7e7753" containerName="extract-utilities"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.119598 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0119d510-0122-4f7d-afde-efee6a7e7753" containerName="extract-utilities"
Jan 21 15:45:49 crc kubenswrapper[4890]: E0121 15:45:49.119608 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0119d510-0122-4f7d-afde-efee6a7e7753" containerName="extract-content"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.119614 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0119d510-0122-4f7d-afde-efee6a7e7753" containerName="extract-content"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.119692 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="0119d510-0122-4f7d-afde-efee6a7e7753" containerName="registry-server"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.120206 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-fxsfm"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.124680 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-b5qxg"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.134053 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.134957 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.138868 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-fxsfm"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.141559 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.200904 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.210038 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-b2gxv"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.210875 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.275231 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7c56\" (UniqueName: \"kubernetes.io/projected/f7841b98-e096-4147-ba53-3a18f33d4c6b-kube-api-access-b7c56\") pod \"nmstate-metrics-54757c584b-fxsfm\" (UID: \"f7841b98-e096-4147-ba53-3a18f33d4c6b\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-fxsfm"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.275332 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1655f2bc-b930-4937-94fb-2a9649e53af7-dbus-socket\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.275371 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a2e8d43c-364c-4f94-a394-619cf820048c-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t5bgt\" (UID: \"a2e8d43c-364c-4f94-a394-619cf820048c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.275406 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1655f2bc-b930-4937-94fb-2a9649e53af7-nmstate-lock\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.275489 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh594\" (UniqueName: \"kubernetes.io/projected/a2e8d43c-364c-4f94-a394-619cf820048c-kube-api-access-dh594\") pod \"nmstate-webhook-8474b5b9d8-t5bgt\" (UID: \"a2e8d43c-364c-4f94-a394-619cf820048c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.376995 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7c56\" (UniqueName: \"kubernetes.io/projected/f7841b98-e096-4147-ba53-3a18f33d4c6b-kube-api-access-b7c56\") pod \"nmstate-metrics-54757c584b-fxsfm\" (UID: \"f7841b98-e096-4147-ba53-3a18f33d4c6b\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-fxsfm"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.377396 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1655f2bc-b930-4937-94fb-2a9649e53af7-dbus-socket\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.377424 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a2e8d43c-364c-4f94-a394-619cf820048c-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t5bgt\" (UID: \"a2e8d43c-364c-4f94-a394-619cf820048c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.377659 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1655f2bc-b930-4937-94fb-2a9649e53af7-nmstate-lock\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.377825 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1655f2bc-b930-4937-94fb-2a9649e53af7-dbus-socket\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.378325 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1655f2bc-b930-4937-94fb-2a9649e53af7-nmstate-lock\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.378380 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnv4n\" (UniqueName: \"kubernetes.io/projected/1655f2bc-b930-4937-94fb-2a9649e53af7-kube-api-access-bnv4n\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.378402 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1655f2bc-b930-4937-94fb-2a9649e53af7-ovs-socket\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.378443 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh594\" (UniqueName: \"kubernetes.io/projected/a2e8d43c-364c-4f94-a394-619cf820048c-kube-api-access-dh594\") pod \"nmstate-webhook-8474b5b9d8-t5bgt\" (UID: \"a2e8d43c-364c-4f94-a394-619cf820048c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.381575 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.384718 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="15c7eb35f58f393a9ceb7bc41b4e4e73eaeaf05b996fe213d725df9631b7a811" exitCode=0
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.384790 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"15c7eb35f58f393a9ceb7bc41b4e4e73eaeaf05b996fe213d725df9631b7a811"}
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.384817 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"b81d20500077e709078904e361919a2211cb0af68d145b245b901c65377ab4de"}
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.384834 4890 scope.go:117] "RemoveContainer" containerID="6d13e88a44e40f057930b863b94e86e1c511eca15fbc8041b56d03f36ff8a4f1"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.385935 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a2e8d43c-364c-4f94-a394-619cf820048c-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t5bgt\" (UID: \"a2e8d43c-364c-4f94-a394-619cf820048c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.387670 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbw5" event={"ID":"0119d510-0122-4f7d-afde-efee6a7e7753","Type":"ContainerDied","Data":"4825dd3a5430f5c7984dc665b7d9a49b9ff399d5bd5a621fab6bb66a1d35c363"}
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.400054 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.400473 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.401830 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbw5"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.404118 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.405646 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.405981 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fq759"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.410849 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh594\" (UniqueName: \"kubernetes.io/projected/a2e8d43c-364c-4f94-a394-619cf820048c-kube-api-access-dh594\") pod \"nmstate-webhook-8474b5b9d8-t5bgt\" (UID: \"a2e8d43c-364c-4f94-a394-619cf820048c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.419140 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7c56\" (UniqueName: \"kubernetes.io/projected/f7841b98-e096-4147-ba53-3a18f33d4c6b-kube-api-access-b7c56\") pod \"nmstate-metrics-54757c584b-fxsfm\" (UID: \"f7841b98-e096-4147-ba53-3a18f33d4c6b\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-fxsfm"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.474127 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sxbw5"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.478258 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sxbw5"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.479076 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg4r9\" (UniqueName: \"kubernetes.io/projected/11c1f035-25b4-4626-b3ee-ead3153e9987-kube-api-access-xg4r9\") pod \"nmstate-console-plugin-7754f76f8b-c252f\" (UID: \"11c1f035-25b4-4626-b3ee-ead3153e9987\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.479159 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/11c1f035-25b4-4626-b3ee-ead3153e9987-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-c252f\" (UID: \"11c1f035-25b4-4626-b3ee-ead3153e9987\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.479185 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnv4n\" (UniqueName: \"kubernetes.io/projected/1655f2bc-b930-4937-94fb-2a9649e53af7-kube-api-access-bnv4n\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.479206 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1655f2bc-b930-4937-94fb-2a9649e53af7-ovs-socket\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.479246 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/11c1f035-25b4-4626-b3ee-ead3153e9987-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-c252f\" (UID: \"11c1f035-25b4-4626-b3ee-ead3153e9987\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.479549 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1655f2bc-b930-4937-94fb-2a9649e53af7-ovs-socket\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.482439 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-fxsfm"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.497783 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.507778 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnv4n\" (UniqueName: \"kubernetes.io/projected/1655f2bc-b930-4937-94fb-2a9649e53af7-kube-api-access-bnv4n\") pod \"nmstate-handler-b2gxv\" (UID: \"1655f2bc-b930-4937-94fb-2a9649e53af7\") " pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.530072 4890 scope.go:117] "RemoveContainer" containerID="532e9f63cc8ebfc3853fe1ac2c985728be12b0b8a98cee28006c5a57ca81999a"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.534648 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b2gxv"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.559269 4890 scope.go:117] "RemoveContainer" containerID="d8c0bdd3784377b43b4a62c07d1cf097d19118d9890782d4d53db414d949abdc"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.581658 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/11c1f035-25b4-4626-b3ee-ead3153e9987-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-c252f\" (UID: \"11c1f035-25b4-4626-b3ee-ead3153e9987\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.581729 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg4r9\" (UniqueName: \"kubernetes.io/projected/11c1f035-25b4-4626-b3ee-ead3153e9987-kube-api-access-xg4r9\") pod \"nmstate-console-plugin-7754f76f8b-c252f\" (UID: \"11c1f035-25b4-4626-b3ee-ead3153e9987\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.581792 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/11c1f035-25b4-4626-b3ee-ead3153e9987-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-c252f\" (UID: \"11c1f035-25b4-4626-b3ee-ead3153e9987\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.584345 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/11c1f035-25b4-4626-b3ee-ead3153e9987-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-c252f\" (UID: \"11c1f035-25b4-4626-b3ee-ead3153e9987\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.584512 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65dcb9588c-58r6z"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.585403 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.585589 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/11c1f035-25b4-4626-b3ee-ead3153e9987-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-c252f\" (UID: \"11c1f035-25b4-4626-b3ee-ead3153e9987\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.600981 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dcb9588c-58r6z"]
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.611493 4890 scope.go:117] "RemoveContainer" containerID="3c7d0df7425dd4eea415f86d96e08361115411ce5f22228e131ef44971aecb1d"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.623259 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg4r9\" (UniqueName: \"kubernetes.io/projected/11c1f035-25b4-4626-b3ee-ead3153e9987-kube-api-access-xg4r9\") pod \"nmstate-console-plugin-7754f76f8b-c252f\" (UID: \"11c1f035-25b4-4626-b3ee-ead3153e9987\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.684713 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f52451b-bc33-4021-9633-cc47a130d315-console-serving-cert\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.684789 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-console-config\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.684820 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p8h5\" (UniqueName: \"kubernetes.io/projected/9f52451b-bc33-4021-9633-cc47a130d315-kube-api-access-6p8h5\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.684861 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9f52451b-bc33-4021-9633-cc47a130d315-console-oauth-config\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.684895 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-oauth-serving-cert\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.684919 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-trusted-ca-bundle\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.684944 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-service-ca\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.739861 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.755793 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-fxsfm"]
Jan 21 15:45:49 crc kubenswrapper[4890]: W0121 15:45:49.769941 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7841b98_e096_4147_ba53_3a18f33d4c6b.slice/crio-238c3028af55a0861ff9daf6cc463fa1189bfec7ce9f73a0d7156aecb3373bff WatchSource:0}: Error finding container 238c3028af55a0861ff9daf6cc463fa1189bfec7ce9f73a0d7156aecb3373bff: Status 404 returned error can't find the container with id 238c3028af55a0861ff9daf6cc463fa1189bfec7ce9f73a0d7156aecb3373bff
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.787766 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9f52451b-bc33-4021-9633-cc47a130d315-console-oauth-config\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.787841 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-oauth-serving-cert\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.787871 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-trusted-ca-bundle\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.787909 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-service-ca\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.787940 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f52451b-bc33-4021-9633-cc47a130d315-console-serving-cert\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.787987 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-console-config\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z"
Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.788013 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p8h5\" (UniqueName: \"kubernetes.io/projected/9f52451b-bc33-4021-9633-cc47a130d315-kube-api-access-6p8h5\") pod \"console-65dcb9588c-58r6z\" (UID:
\"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.789252 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-service-ca\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.789252 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-console-config\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.790806 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-oauth-serving-cert\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.790810 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f52451b-bc33-4021-9633-cc47a130d315-trusted-ca-bundle\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.796520 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt"] Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.800975 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/9f52451b-bc33-4021-9633-cc47a130d315-console-oauth-config\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.802719 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f52451b-bc33-4021-9633-cc47a130d315-console-serving-cert\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.806234 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p8h5\" (UniqueName: \"kubernetes.io/projected/9f52451b-bc33-4021-9633-cc47a130d315-kube-api-access-6p8h5\") pod \"console-65dcb9588c-58r6z\" (UID: \"9f52451b-bc33-4021-9633-cc47a130d315\") " pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:49 crc kubenswrapper[4890]: W0121 15:45:49.809889 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2e8d43c_364c_4f94_a394_619cf820048c.slice/crio-066df0ca3085d0c1eb03e75c4f00bd55a2729d7c75ce08ff0579bcb39e304e9c WatchSource:0}: Error finding container 066df0ca3085d0c1eb03e75c4f00bd55a2729d7c75ce08ff0579bcb39e304e9c: Status 404 returned error can't find the container with id 066df0ca3085d0c1eb03e75c4f00bd55a2729d7c75ce08ff0579bcb39e304e9c Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.906412 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.921278 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0119d510-0122-4f7d-afde-efee6a7e7753" path="/var/lib/kubelet/pods/0119d510-0122-4f7d-afde-efee6a7e7753/volumes" Jan 21 15:45:49 crc kubenswrapper[4890]: I0121 15:45:49.955713 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f"] Jan 21 15:45:49 crc kubenswrapper[4890]: W0121 15:45:49.972430 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11c1f035_25b4_4626_b3ee_ead3153e9987.slice/crio-303c58cc9e416a9ce0fa5080fb7eaf0f476457aa494894f7065d2926fb12a0e2 WatchSource:0}: Error finding container 303c58cc9e416a9ce0fa5080fb7eaf0f476457aa494894f7065d2926fb12a0e2: Status 404 returned error can't find the container with id 303c58cc9e416a9ce0fa5080fb7eaf0f476457aa494894f7065d2926fb12a0e2 Jan 21 15:45:50 crc kubenswrapper[4890]: I0121 15:45:50.094918 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dcb9588c-58r6z"] Jan 21 15:45:50 crc kubenswrapper[4890]: W0121 15:45:50.102586 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f52451b_bc33_4021_9633_cc47a130d315.slice/crio-3d6343dc8878047073300c983defddb94640af6dc0dada9fdc90033d29c3972b WatchSource:0}: Error finding container 3d6343dc8878047073300c983defddb94640af6dc0dada9fdc90033d29c3972b: Status 404 returned error can't find the container with id 3d6343dc8878047073300c983defddb94640af6dc0dada9fdc90033d29c3972b Jan 21 15:45:50 crc kubenswrapper[4890]: I0121 15:45:50.395295 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b2gxv" 
event={"ID":"1655f2bc-b930-4937-94fb-2a9649e53af7","Type":"ContainerStarted","Data":"f352345b3dc7071569bc8ba2fe49972e698ee5fab3d59bc79da45491ffb59cec"} Jan 21 15:45:50 crc kubenswrapper[4890]: I0121 15:45:50.396141 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt" event={"ID":"a2e8d43c-364c-4f94-a394-619cf820048c","Type":"ContainerStarted","Data":"066df0ca3085d0c1eb03e75c4f00bd55a2729d7c75ce08ff0579bcb39e304e9c"} Jan 21 15:45:50 crc kubenswrapper[4890]: I0121 15:45:50.398608 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f" event={"ID":"11c1f035-25b4-4626-b3ee-ead3153e9987","Type":"ContainerStarted","Data":"303c58cc9e416a9ce0fa5080fb7eaf0f476457aa494894f7065d2926fb12a0e2"} Jan 21 15:45:50 crc kubenswrapper[4890]: I0121 15:45:50.399524 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-fxsfm" event={"ID":"f7841b98-e096-4147-ba53-3a18f33d4c6b","Type":"ContainerStarted","Data":"238c3028af55a0861ff9daf6cc463fa1189bfec7ce9f73a0d7156aecb3373bff"} Jan 21 15:45:50 crc kubenswrapper[4890]: I0121 15:45:50.402871 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dcb9588c-58r6z" event={"ID":"9f52451b-bc33-4021-9633-cc47a130d315","Type":"ContainerStarted","Data":"3d6343dc8878047073300c983defddb94640af6dc0dada9fdc90033d29c3972b"} Jan 21 15:45:51 crc kubenswrapper[4890]: I0121 15:45:51.410093 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dcb9588c-58r6z" event={"ID":"9f52451b-bc33-4021-9633-cc47a130d315","Type":"ContainerStarted","Data":"c48cea8601f54b62d1d490be3e9463674d8a76923c604ae142b6dbb3330784ea"} Jan 21 15:45:51 crc kubenswrapper[4890]: I0121 15:45:51.433464 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65dcb9588c-58r6z" podStartSLOduration=2.433442728 
podStartE2EDuration="2.433442728s" podCreationTimestamp="2026-01-21 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:45:51.42948076 +0000 UTC m=+833.790923179" watchObservedRunningTime="2026-01-21 15:45:51.433442728 +0000 UTC m=+833.794885137" Jan 21 15:45:54 crc kubenswrapper[4890]: I0121 15:45:54.430238 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-fxsfm" event={"ID":"f7841b98-e096-4147-ba53-3a18f33d4c6b","Type":"ContainerStarted","Data":"27204a2846cb8b5cdb5ef33b92670690de05f05ee0ed389209250b9aae627aae"} Jan 21 15:45:54 crc kubenswrapper[4890]: I0121 15:45:54.432149 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b2gxv" event={"ID":"1655f2bc-b930-4937-94fb-2a9649e53af7","Type":"ContainerStarted","Data":"4aa349df9a2d58980f72d4773073c8b9702cbbdaff6964fcedc8c94b19e1cfe4"} Jan 21 15:45:54 crc kubenswrapper[4890]: I0121 15:45:54.432247 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-b2gxv" Jan 21 15:45:54 crc kubenswrapper[4890]: I0121 15:45:54.433253 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt" event={"ID":"a2e8d43c-364c-4f94-a394-619cf820048c","Type":"ContainerStarted","Data":"a48996003b67577cea9fae5399dde551f62acaedd0d151edf42f498160a5a7ed"} Jan 21 15:45:54 crc kubenswrapper[4890]: I0121 15:45:54.433344 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt" Jan 21 15:45:54 crc kubenswrapper[4890]: I0121 15:45:54.434437 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f" 
event={"ID":"11c1f035-25b4-4626-b3ee-ead3153e9987","Type":"ContainerStarted","Data":"98d6e9f2caf92d5ddd65d326ba4e1066a87597025d665405916496b1d6088042"} Jan 21 15:45:54 crc kubenswrapper[4890]: I0121 15:45:54.451936 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-b2gxv" podStartSLOduration=1.429683595 podStartE2EDuration="5.451918241s" podCreationTimestamp="2026-01-21 15:45:49 +0000 UTC" firstStartedPulling="2026-01-21 15:45:49.623056082 +0000 UTC m=+831.984498491" lastFinishedPulling="2026-01-21 15:45:53.645290718 +0000 UTC m=+836.006733137" observedRunningTime="2026-01-21 15:45:54.446426465 +0000 UTC m=+836.807868884" watchObservedRunningTime="2026-01-21 15:45:54.451918241 +0000 UTC m=+836.813360660" Jan 21 15:45:54 crc kubenswrapper[4890]: I0121 15:45:54.462844 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt" podStartSLOduration=1.700512574 podStartE2EDuration="5.46282898s" podCreationTimestamp="2026-01-21 15:45:49 +0000 UTC" firstStartedPulling="2026-01-21 15:45:49.812233625 +0000 UTC m=+832.173676034" lastFinishedPulling="2026-01-21 15:45:53.574550021 +0000 UTC m=+835.935992440" observedRunningTime="2026-01-21 15:45:54.462759639 +0000 UTC m=+836.824202058" watchObservedRunningTime="2026-01-21 15:45:54.46282898 +0000 UTC m=+836.824271399" Jan 21 15:45:54 crc kubenswrapper[4890]: I0121 15:45:54.481417 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c252f" podStartSLOduration=1.890218751 podStartE2EDuration="5.481342068s" podCreationTimestamp="2026-01-21 15:45:49 +0000 UTC" firstStartedPulling="2026-01-21 15:45:49.982759107 +0000 UTC m=+832.344201516" lastFinishedPulling="2026-01-21 15:45:53.573882424 +0000 UTC m=+835.935324833" observedRunningTime="2026-01-21 15:45:54.476631181 +0000 UTC m=+836.838073600" 
watchObservedRunningTime="2026-01-21 15:45:54.481342068 +0000 UTC m=+836.842784497" Jan 21 15:45:57 crc kubenswrapper[4890]: I0121 15:45:57.466061 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-fxsfm" event={"ID":"f7841b98-e096-4147-ba53-3a18f33d4c6b","Type":"ContainerStarted","Data":"9a8331c70b6add88b2b468ad1aca3ef96ec21fcc4a6d4714b9ba769982cd61e7"} Jan 21 15:45:57 crc kubenswrapper[4890]: I0121 15:45:57.495990 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-fxsfm" podStartSLOduration=1.976295787 podStartE2EDuration="8.49569624s" podCreationTimestamp="2026-01-21 15:45:49 +0000 UTC" firstStartedPulling="2026-01-21 15:45:49.77153122 +0000 UTC m=+832.132973629" lastFinishedPulling="2026-01-21 15:45:56.290931673 +0000 UTC m=+838.652374082" observedRunningTime="2026-01-21 15:45:57.487751563 +0000 UTC m=+839.849194012" watchObservedRunningTime="2026-01-21 15:45:57.49569624 +0000 UTC m=+839.857138659" Jan 21 15:45:59 crc kubenswrapper[4890]: I0121 15:45:59.569384 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-b2gxv" Jan 21 15:45:59 crc kubenswrapper[4890]: I0121 15:45:59.907106 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:59 crc kubenswrapper[4890]: I0121 15:45:59.907189 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:45:59 crc kubenswrapper[4890]: I0121 15:45:59.925647 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:46:00 crc kubenswrapper[4890]: I0121 15:46:00.489918 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65dcb9588c-58r6z" Jan 21 15:46:00 crc 
kubenswrapper[4890]: I0121 15:46:00.553078 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vq4s5"] Jan 21 15:46:09 crc kubenswrapper[4890]: I0121 15:46:09.504568 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t5bgt" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.143042 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9"] Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.145712 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.148182 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.153163 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9"] Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.220280 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.220328 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9\" (UID: 
\"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.220364 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqd4g\" (UniqueName: \"kubernetes.io/projected/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-kube-api-access-hqd4g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.322210 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.322303 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.322332 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqd4g\" (UniqueName: \"kubernetes.io/projected/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-kube-api-access-hqd4g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.322839 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.323018 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.347919 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqd4g\" (UniqueName: \"kubernetes.io/projected/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-kube-api-access-hqd4g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.477722 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:22 crc kubenswrapper[4890]: I0121 15:46:22.672520 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9"] Jan 21 15:46:23 crc kubenswrapper[4890]: I0121 15:46:23.631964 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" event={"ID":"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8","Type":"ContainerStarted","Data":"7da906cce4361b9cf09216b4a993b6b2f4f3fdd90d9e440b6c6988eac37e39f3"} Jan 21 15:46:24 crc kubenswrapper[4890]: I0121 15:46:24.639012 4890 generic.go:334] "Generic (PLEG): container finished" podID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerID="bda11e3a3859d2a8bb87c064fc83736380c7590a9c855bcf11074e9fedd1a1a4" exitCode=0 Jan 21 15:46:24 crc kubenswrapper[4890]: I0121 15:46:24.639084 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" event={"ID":"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8","Type":"ContainerDied","Data":"bda11e3a3859d2a8bb87c064fc83736380c7590a9c855bcf11074e9fedd1a1a4"} Jan 21 15:46:25 crc kubenswrapper[4890]: I0121 15:46:25.609392 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-vq4s5" podUID="b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" containerName="console" containerID="cri-o://5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6" gracePeriod=15 Jan 21 15:46:25 crc kubenswrapper[4890]: I0121 15:46:25.967870 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vq4s5_b91d73c6-e6ae-4496-bf1d-a00f1518e5ed/console/0.log" Jan 21 15:46:25 crc kubenswrapper[4890]: I0121 15:46:25.968432 4890 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.071976 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc8gc\" (UniqueName: \"kubernetes.io/projected/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-kube-api-access-xc8gc\") pod \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.072037 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-config\") pod \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.072059 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-trusted-ca-bundle\") pod \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.072076 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-serving-cert\") pod \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.072105 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-oauth-config\") pod \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.072126 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-oauth-serving-cert\") pod \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.072162 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-service-ca\") pod \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\" (UID: \"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed\") " Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.072820 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-service-ca" (OuterVolumeSpecName: "service-ca") pod "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" (UID: "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.072836 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-config" (OuterVolumeSpecName: "console-config") pod "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" (UID: "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.072857 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" (UID: "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.073436 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" (UID: "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.084806 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" (UID: "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.086662 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-kube-api-access-xc8gc" (OuterVolumeSpecName: "kube-api-access-xc8gc") pod "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" (UID: "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed"). InnerVolumeSpecName "kube-api-access-xc8gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.091409 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" (UID: "b91d73c6-e6ae-4496-bf1d-a00f1518e5ed"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.172985 4890 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.173020 4890 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.173033 4890 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.173046 4890 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.173058 4890 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.173068 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc8gc\" (UniqueName: \"kubernetes.io/projected/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-kube-api-access-xc8gc\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.173082 4890 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:26 crc 
kubenswrapper[4890]: I0121 15:46:26.655303 4890 generic.go:334] "Generic (PLEG): container finished" podID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerID="c02c53201f8d79ebbd1a34f5563fb8571d7778870cf13d1e9d9595d59d20af63" exitCode=0 Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.655403 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" event={"ID":"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8","Type":"ContainerDied","Data":"c02c53201f8d79ebbd1a34f5563fb8571d7778870cf13d1e9d9595d59d20af63"} Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.658834 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vq4s5_b91d73c6-e6ae-4496-bf1d-a00f1518e5ed/console/0.log" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.658898 4890 generic.go:334] "Generic (PLEG): container finished" podID="b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" containerID="5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6" exitCode=2 Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.658947 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vq4s5" event={"ID":"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed","Type":"ContainerDied","Data":"5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6"} Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.658990 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vq4s5" event={"ID":"b91d73c6-e6ae-4496-bf1d-a00f1518e5ed","Type":"ContainerDied","Data":"d7f64b8dc9567d9f65c216f3343aa1286ddeef75e9fb48e28b0d35a9bacb860d"} Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.659025 4890 scope.go:117] "RemoveContainer" containerID="5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.659310 4890 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-vq4s5" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.685844 4890 scope.go:117] "RemoveContainer" containerID="5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6" Jan 21 15:46:26 crc kubenswrapper[4890]: E0121 15:46:26.686417 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6\": container with ID starting with 5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6 not found: ID does not exist" containerID="5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.686511 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6"} err="failed to get container status \"5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6\": rpc error: code = NotFound desc = could not find container \"5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6\": container with ID starting with 5497c6acf397d94dd9be8613a32b6369e4245205bb8c98b64b5dc0794cb95af6 not found: ID does not exist" Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.703280 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vq4s5"] Jan 21 15:46:26 crc kubenswrapper[4890]: I0121 15:46:26.706831 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vq4s5"] Jan 21 15:46:27 crc kubenswrapper[4890]: I0121 15:46:27.667008 4890 generic.go:334] "Generic (PLEG): container finished" podID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerID="8429058496fe858d83994e61b14942654f804eff2e4bc9ad1fad2b0907b22d12" exitCode=0 Jan 21 15:46:27 crc kubenswrapper[4890]: I0121 15:46:27.667070 4890 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" event={"ID":"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8","Type":"ContainerDied","Data":"8429058496fe858d83994e61b14942654f804eff2e4bc9ad1fad2b0907b22d12"} Jan 21 15:46:27 crc kubenswrapper[4890]: I0121 15:46:27.922007 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" path="/var/lib/kubelet/pods/b91d73c6-e6ae-4496-bf1d-a00f1518e5ed/volumes" Jan 21 15:46:28 crc kubenswrapper[4890]: I0121 15:46:28.910281 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.012633 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqd4g\" (UniqueName: \"kubernetes.io/projected/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-kube-api-access-hqd4g\") pod \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.012752 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-bundle\") pod \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.012819 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-util\") pod \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\" (UID: \"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8\") " Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.014070 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-bundle" (OuterVolumeSpecName: "bundle") pod "4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" (UID: "4d8a46ee-3e93-4f73-a9bf-f0f797698cc8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.017236 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-kube-api-access-hqd4g" (OuterVolumeSpecName: "kube-api-access-hqd4g") pod "4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" (UID: "4d8a46ee-3e93-4f73-a9bf-f0f797698cc8"). InnerVolumeSpecName "kube-api-access-hqd4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.112574 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-util" (OuterVolumeSpecName: "util") pod "4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" (UID: "4d8a46ee-3e93-4f73-a9bf-f0f797698cc8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.114081 4890 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.114117 4890 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.114130 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqd4g\" (UniqueName: \"kubernetes.io/projected/4d8a46ee-3e93-4f73-a9bf-f0f797698cc8-kube-api-access-hqd4g\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.684480 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" event={"ID":"4d8a46ee-3e93-4f73-a9bf-f0f797698cc8","Type":"ContainerDied","Data":"7da906cce4361b9cf09216b4a993b6b2f4f3fdd90d9e440b6c6988eac37e39f3"} Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.684519 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9" Jan 21 15:46:29 crc kubenswrapper[4890]: I0121 15:46:29.684534 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7da906cce4361b9cf09216b4a993b6b2f4f3fdd90d9e440b6c6988eac37e39f3" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.772534 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84"] Jan 21 15:46:40 crc kubenswrapper[4890]: E0121 15:46:40.773998 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" containerName="console" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.774021 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" containerName="console" Jan 21 15:46:40 crc kubenswrapper[4890]: E0121 15:46:40.774034 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerName="util" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.774043 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerName="util" Jan 21 15:46:40 crc kubenswrapper[4890]: E0121 15:46:40.774055 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerName="extract" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.774064 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerName="extract" Jan 21 15:46:40 crc kubenswrapper[4890]: E0121 15:46:40.774095 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerName="pull" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.774103 4890 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerName="pull" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.774222 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91d73c6-e6ae-4496-bf1d-a00f1518e5ed" containerName="console" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.774243 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d8a46ee-3e93-4f73-a9bf-f0f797698cc8" containerName="extract" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.775011 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.782338 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.785378 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.786114 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.786301 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.786485 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-d97q5" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.800004 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84"] Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.859719 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rph27\" (UniqueName: 
\"kubernetes.io/projected/2a07a757-38cf-476f-acf2-758e953dc057-kube-api-access-rph27\") pod \"metallb-operator-controller-manager-66c89b74b6-f8w84\" (UID: \"2a07a757-38cf-476f-acf2-758e953dc057\") " pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.859794 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a07a757-38cf-476f-acf2-758e953dc057-webhook-cert\") pod \"metallb-operator-controller-manager-66c89b74b6-f8w84\" (UID: \"2a07a757-38cf-476f-acf2-758e953dc057\") " pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.859891 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a07a757-38cf-476f-acf2-758e953dc057-apiservice-cert\") pod \"metallb-operator-controller-manager-66c89b74b6-f8w84\" (UID: \"2a07a757-38cf-476f-acf2-758e953dc057\") " pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.961031 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rph27\" (UniqueName: \"kubernetes.io/projected/2a07a757-38cf-476f-acf2-758e953dc057-kube-api-access-rph27\") pod \"metallb-operator-controller-manager-66c89b74b6-f8w84\" (UID: \"2a07a757-38cf-476f-acf2-758e953dc057\") " pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.961322 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a07a757-38cf-476f-acf2-758e953dc057-webhook-cert\") pod \"metallb-operator-controller-manager-66c89b74b6-f8w84\" (UID: 
\"2a07a757-38cf-476f-acf2-758e953dc057\") " pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.961490 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a07a757-38cf-476f-acf2-758e953dc057-apiservice-cert\") pod \"metallb-operator-controller-manager-66c89b74b6-f8w84\" (UID: \"2a07a757-38cf-476f-acf2-758e953dc057\") " pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.969155 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a07a757-38cf-476f-acf2-758e953dc057-apiservice-cert\") pod \"metallb-operator-controller-manager-66c89b74b6-f8w84\" (UID: \"2a07a757-38cf-476f-acf2-758e953dc057\") " pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.978627 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a07a757-38cf-476f-acf2-758e953dc057-webhook-cert\") pod \"metallb-operator-controller-manager-66c89b74b6-f8w84\" (UID: \"2a07a757-38cf-476f-acf2-758e953dc057\") " pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:40 crc kubenswrapper[4890]: I0121 15:46:40.981069 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rph27\" (UniqueName: \"kubernetes.io/projected/2a07a757-38cf-476f-acf2-758e953dc057-kube-api-access-rph27\") pod \"metallb-operator-controller-manager-66c89b74b6-f8w84\" (UID: \"2a07a757-38cf-476f-acf2-758e953dc057\") " pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.097325 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.162329 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6"] Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.163685 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.180055 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xqvpl" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.180266 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.180334 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.189738 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6"] Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.365254 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9751dff3-fe38-4186-9f8c-e8aa4a08cf6d-webhook-cert\") pod \"metallb-operator-webhook-server-5c9dbfd5c5-cnkj6\" (UID: \"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d\") " pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.365296 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjv4t\" (UniqueName: \"kubernetes.io/projected/9751dff3-fe38-4186-9f8c-e8aa4a08cf6d-kube-api-access-fjv4t\") pod 
\"metallb-operator-webhook-server-5c9dbfd5c5-cnkj6\" (UID: \"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d\") " pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.365378 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9751dff3-fe38-4186-9f8c-e8aa4a08cf6d-apiservice-cert\") pod \"metallb-operator-webhook-server-5c9dbfd5c5-cnkj6\" (UID: \"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d\") " pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.467539 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9751dff3-fe38-4186-9f8c-e8aa4a08cf6d-webhook-cert\") pod \"metallb-operator-webhook-server-5c9dbfd5c5-cnkj6\" (UID: \"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d\") " pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.467588 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjv4t\" (UniqueName: \"kubernetes.io/projected/9751dff3-fe38-4186-9f8c-e8aa4a08cf6d-kube-api-access-fjv4t\") pod \"metallb-operator-webhook-server-5c9dbfd5c5-cnkj6\" (UID: \"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d\") " pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.467674 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9751dff3-fe38-4186-9f8c-e8aa4a08cf6d-apiservice-cert\") pod \"metallb-operator-webhook-server-5c9dbfd5c5-cnkj6\" (UID: \"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d\") " pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.474034 
4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9751dff3-fe38-4186-9f8c-e8aa4a08cf6d-apiservice-cert\") pod \"metallb-operator-webhook-server-5c9dbfd5c5-cnkj6\" (UID: \"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d\") " pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.474819 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9751dff3-fe38-4186-9f8c-e8aa4a08cf6d-webhook-cert\") pod \"metallb-operator-webhook-server-5c9dbfd5c5-cnkj6\" (UID: \"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d\") " pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.488372 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjv4t\" (UniqueName: \"kubernetes.io/projected/9751dff3-fe38-4186-9f8c-e8aa4a08cf6d-kube-api-access-fjv4t\") pod \"metallb-operator-webhook-server-5c9dbfd5c5-cnkj6\" (UID: \"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d\") " pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.493215 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.658788 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84"] Jan 21 15:46:41 crc kubenswrapper[4890]: W0121 15:46:41.670484 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a07a757_38cf_476f_acf2_758e953dc057.slice/crio-3197ddc110204c0808506976f8a54ba806ae928b2c0d19d7de98c111568a74a2 WatchSource:0}: Error finding container 3197ddc110204c0808506976f8a54ba806ae928b2c0d19d7de98c111568a74a2: Status 404 returned error can't find the container with id 3197ddc110204c0808506976f8a54ba806ae928b2c0d19d7de98c111568a74a2 Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.754180 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6"] Jan 21 15:46:41 crc kubenswrapper[4890]: W0121 15:46:41.760393 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9751dff3_fe38_4186_9f8c_e8aa4a08cf6d.slice/crio-fdaddc9b1dbcc6d2ac0193c5753e27bd1c71815cf0d53966b7e33d1ef9cdfdbf WatchSource:0}: Error finding container fdaddc9b1dbcc6d2ac0193c5753e27bd1c71815cf0d53966b7e33d1ef9cdfdbf: Status 404 returned error can't find the container with id fdaddc9b1dbcc6d2ac0193c5753e27bd1c71815cf0d53966b7e33d1ef9cdfdbf Jan 21 15:46:41 crc kubenswrapper[4890]: I0121 15:46:41.766891 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" event={"ID":"2a07a757-38cf-476f-acf2-758e953dc057","Type":"ContainerStarted","Data":"3197ddc110204c0808506976f8a54ba806ae928b2c0d19d7de98c111568a74a2"} Jan 21 15:46:42 crc kubenswrapper[4890]: I0121 15:46:42.773129 4890 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" event={"ID":"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d","Type":"ContainerStarted","Data":"fdaddc9b1dbcc6d2ac0193c5753e27bd1c71815cf0d53966b7e33d1ef9cdfdbf"} Jan 21 15:46:48 crc kubenswrapper[4890]: I0121 15:46:48.810896 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" event={"ID":"9751dff3-fe38-4186-9f8c-e8aa4a08cf6d","Type":"ContainerStarted","Data":"e309ca222f80229a3c17d8e0aaaaf74ae49eec6dd7cac286933d84fcf8852815"} Jan 21 15:46:48 crc kubenswrapper[4890]: I0121 15:46:48.811537 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:46:48 crc kubenswrapper[4890]: I0121 15:46:48.813213 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" event={"ID":"2a07a757-38cf-476f-acf2-758e953dc057","Type":"ContainerStarted","Data":"3b1c66047544ee2708df0beb0b27827c99b16cfc04d117bafb15f3e6b0f38f35"} Jan 21 15:46:48 crc kubenswrapper[4890]: I0121 15:46:48.813277 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:46:48 crc kubenswrapper[4890]: I0121 15:46:48.838554 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" podStartSLOduration=1.609569908 podStartE2EDuration="7.838534351s" podCreationTimestamp="2026-01-21 15:46:41 +0000 UTC" firstStartedPulling="2026-01-21 15:46:41.763939204 +0000 UTC m=+884.125381603" lastFinishedPulling="2026-01-21 15:46:47.992903637 +0000 UTC m=+890.354346046" observedRunningTime="2026-01-21 15:46:48.837633298 +0000 UTC m=+891.199075697" watchObservedRunningTime="2026-01-21 15:46:48.838534351 +0000 UTC m=+891.199976760" Jan 21 15:46:48 
crc kubenswrapper[4890]: I0121 15:46:48.871155 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" podStartSLOduration=2.582578656 podStartE2EDuration="8.871133994s" podCreationTimestamp="2026-01-21 15:46:40 +0000 UTC" firstStartedPulling="2026-01-21 15:46:41.672165376 +0000 UTC m=+884.033607785" lastFinishedPulling="2026-01-21 15:46:47.960720714 +0000 UTC m=+890.322163123" observedRunningTime="2026-01-21 15:46:48.867642966 +0000 UTC m=+891.229085365" watchObservedRunningTime="2026-01-21 15:46:48.871133994 +0000 UTC m=+891.232576403" Jan 21 15:47:01 crc kubenswrapper[4890]: I0121 15:47:01.499877 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5c9dbfd5c5-cnkj6" Jan 21 15:47:10 crc kubenswrapper[4890]: I0121 15:47:10.956999 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bq2t7"] Jan 21 15:47:10 crc kubenswrapper[4890]: I0121 15:47:10.958794 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:10 crc kubenswrapper[4890]: I0121 15:47:10.970527 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq2t7"] Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.093020 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-catalog-content\") pod \"redhat-marketplace-bq2t7\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.093085 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8jrc\" (UniqueName: \"kubernetes.io/projected/b490b293-5c64-4424-bfca-4fb8ba23ef0f-kube-api-access-l8jrc\") pod \"redhat-marketplace-bq2t7\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.093121 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-utilities\") pod \"redhat-marketplace-bq2t7\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.194701 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-catalog-content\") pod \"redhat-marketplace-bq2t7\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.194779 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l8jrc\" (UniqueName: \"kubernetes.io/projected/b490b293-5c64-4424-bfca-4fb8ba23ef0f-kube-api-access-l8jrc\") pod \"redhat-marketplace-bq2t7\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.194817 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-utilities\") pod \"redhat-marketplace-bq2t7\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.195306 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-catalog-content\") pod \"redhat-marketplace-bq2t7\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.195371 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-utilities\") pod \"redhat-marketplace-bq2t7\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.222171 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8jrc\" (UniqueName: \"kubernetes.io/projected/b490b293-5c64-4424-bfca-4fb8ba23ef0f-kube-api-access-l8jrc\") pod \"redhat-marketplace-bq2t7\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.275201 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.505934 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq2t7"] Jan 21 15:47:11 crc kubenswrapper[4890]: I0121 15:47:11.959384 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq2t7" event={"ID":"b490b293-5c64-4424-bfca-4fb8ba23ef0f","Type":"ContainerStarted","Data":"780ecdfa135c6591e02a03199f04749a7dba5051aed928f8a019060856d1b05d"} Jan 21 15:47:13 crc kubenswrapper[4890]: I0121 15:47:13.972968 4890 generic.go:334] "Generic (PLEG): container finished" podID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerID="82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c" exitCode=0 Jan 21 15:47:13 crc kubenswrapper[4890]: I0121 15:47:13.973031 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq2t7" event={"ID":"b490b293-5c64-4424-bfca-4fb8ba23ef0f","Type":"ContainerDied","Data":"82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c"} Jan 21 15:47:15 crc kubenswrapper[4890]: I0121 15:47:15.988762 4890 generic.go:334] "Generic (PLEG): container finished" podID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerID="6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd" exitCode=0 Jan 21 15:47:15 crc kubenswrapper[4890]: I0121 15:47:15.988856 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq2t7" event={"ID":"b490b293-5c64-4424-bfca-4fb8ba23ef0f","Type":"ContainerDied","Data":"6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd"} Jan 21 15:47:16 crc kubenswrapper[4890]: I0121 15:47:16.996919 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq2t7" 
event={"ID":"b490b293-5c64-4424-bfca-4fb8ba23ef0f","Type":"ContainerStarted","Data":"816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555"} Jan 21 15:47:17 crc kubenswrapper[4890]: I0121 15:47:17.023151 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bq2t7" podStartSLOduration=4.456150017 podStartE2EDuration="7.023132984s" podCreationTimestamp="2026-01-21 15:47:10 +0000 UTC" firstStartedPulling="2026-01-21 15:47:13.975285617 +0000 UTC m=+916.336728026" lastFinishedPulling="2026-01-21 15:47:16.542268574 +0000 UTC m=+918.903710993" observedRunningTime="2026-01-21 15:47:17.01892325 +0000 UTC m=+919.380365679" watchObservedRunningTime="2026-01-21 15:47:17.023132984 +0000 UTC m=+919.384575413" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.100404 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-66c89b74b6-f8w84" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.276268 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.276329 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.317788 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.840477 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk"] Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.841921 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.849804 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gvndq" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.850001 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.852458 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-v7hrv"] Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.855577 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.856825 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk"] Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.858336 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.858664 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.929312 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-f25hv"] Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.930258 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-f25hv" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.934205 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.934553 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.934639 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-z6xnk"] Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.934733 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.934786 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bqpms" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.935785 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.937700 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 15:47:21 crc kubenswrapper[4890]: I0121 15:47:21.943782 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-z6xnk"] Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.039583 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9-cert\") pod \"controller-6968d8fdc4-z6xnk\" (UID: \"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9\") " pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.039665 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-frr-conf\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.039720 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-reloader\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.039819 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9-metrics-certs\") pod \"controller-6968d8fdc4-z6xnk\" (UID: \"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9\") " pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 
15:47:22.039849 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw8d8\" (UniqueName: \"kubernetes.io/projected/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-kube-api-access-cw8d8\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.039878 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-memberlist\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.039904 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6klc\" (UniqueName: \"kubernetes.io/projected/d1f32d47-a574-442b-be16-9cdb86b30aa8-kube-api-access-m6klc\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.039928 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-metrics-certs\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.039955 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhlqf\" (UniqueName: \"kubernetes.io/projected/7c1ba220-b8cb-40b8-ae09-e94423a126a5-kube-api-access-bhlqf\") pod \"frr-k8s-webhook-server-7df86c4f6c-dqvjk\" (UID: \"7c1ba220-b8cb-40b8-ae09-e94423a126a5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 
15:47:22.040019 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-frr-sockets\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.040045 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d1f32d47-a574-442b-be16-9cdb86b30aa8-frr-startup\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.040067 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-metallb-excludel2\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.040126 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9vmt\" (UniqueName: \"kubernetes.io/projected/cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9-kube-api-access-k9vmt\") pod \"controller-6968d8fdc4-z6xnk\" (UID: \"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9\") " pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.040178 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c1ba220-b8cb-40b8-ae09-e94423a126a5-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dqvjk\" (UID: \"7c1ba220-b8cb-40b8-ae09-e94423a126a5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.040239 
4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f32d47-a574-442b-be16-9cdb86b30aa8-metrics-certs\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.040265 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-metrics\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.066906 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.141948 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-frr-sockets\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142045 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d1f32d47-a574-442b-be16-9cdb86b30aa8-frr-startup\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142073 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-metallb-excludel2\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc 
kubenswrapper[4890]: I0121 15:47:22.142112 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9vmt\" (UniqueName: \"kubernetes.io/projected/cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9-kube-api-access-k9vmt\") pod \"controller-6968d8fdc4-z6xnk\" (UID: \"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9\") " pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142156 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c1ba220-b8cb-40b8-ae09-e94423a126a5-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dqvjk\" (UID: \"7c1ba220-b8cb-40b8-ae09-e94423a126a5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142197 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f32d47-a574-442b-be16-9cdb86b30aa8-metrics-certs\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142221 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-metrics\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142249 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9-cert\") pod \"controller-6968d8fdc4-z6xnk\" (UID: \"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9\") " pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142267 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-frr-conf\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142296 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-reloader\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142328 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9-metrics-certs\") pod \"controller-6968d8fdc4-z6xnk\" (UID: \"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9\") " pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142367 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw8d8\" (UniqueName: \"kubernetes.io/projected/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-kube-api-access-cw8d8\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142389 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-memberlist\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142406 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6klc\" (UniqueName: \"kubernetes.io/projected/d1f32d47-a574-442b-be16-9cdb86b30aa8-kube-api-access-m6klc\") pod \"frr-k8s-v7hrv\" (UID: 
\"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142431 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-metrics-certs\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142462 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhlqf\" (UniqueName: \"kubernetes.io/projected/7c1ba220-b8cb-40b8-ae09-e94423a126a5-kube-api-access-bhlqf\") pod \"frr-k8s-webhook-server-7df86c4f6c-dqvjk\" (UID: \"7c1ba220-b8cb-40b8-ae09-e94423a126a5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142684 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-frr-sockets\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.142922 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-frr-conf\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: E0121 15:47:22.143149 4890 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 15:47:22 crc kubenswrapper[4890]: E0121 15:47:22.143268 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-memberlist podName:de8bead9-3c0f-4ba7-908b-c8c21763eb7c nodeName:}" failed. 
No retries permitted until 2026-01-21 15:47:22.643249476 +0000 UTC m=+925.004691885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-memberlist") pod "speaker-f25hv" (UID: "de8bead9-3c0f-4ba7-908b-c8c21763eb7c") : secret "metallb-memberlist" not found Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.143290 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-reloader\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.143591 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-metallb-excludel2\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.143611 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d1f32d47-a574-442b-be16-9cdb86b30aa8-frr-startup\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.143658 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d1f32d47-a574-442b-be16-9cdb86b30aa8-metrics\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: E0121 15:47:22.143810 4890 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 21 15:47:22 crc kubenswrapper[4890]: E0121 
15:47:22.143905 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f32d47-a574-442b-be16-9cdb86b30aa8-metrics-certs podName:d1f32d47-a574-442b-be16-9cdb86b30aa8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:22.643871232 +0000 UTC m=+925.005313841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f32d47-a574-442b-be16-9cdb86b30aa8-metrics-certs") pod "frr-k8s-v7hrv" (UID: "d1f32d47-a574-442b-be16-9cdb86b30aa8") : secret "frr-k8s-certs-secret" not found Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.147096 4890 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.153146 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9-metrics-certs\") pod \"controller-6968d8fdc4-z6xnk\" (UID: \"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9\") " pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.156803 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c1ba220-b8cb-40b8-ae09-e94423a126a5-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dqvjk\" (UID: \"7c1ba220-b8cb-40b8-ae09-e94423a126a5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.168692 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9-cert\") pod \"controller-6968d8fdc4-z6xnk\" (UID: \"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9\") " pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.175838 4890 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-metrics-certs\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.178131 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6klc\" (UniqueName: \"kubernetes.io/projected/d1f32d47-a574-442b-be16-9cdb86b30aa8-kube-api-access-m6klc\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.180078 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9vmt\" (UniqueName: \"kubernetes.io/projected/cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9-kube-api-access-k9vmt\") pod \"controller-6968d8fdc4-z6xnk\" (UID: \"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9\") " pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.186373 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq2t7"] Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.197253 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhlqf\" (UniqueName: \"kubernetes.io/projected/7c1ba220-b8cb-40b8-ae09-e94423a126a5-kube-api-access-bhlqf\") pod \"frr-k8s-webhook-server-7df86c4f6c-dqvjk\" (UID: \"7c1ba220-b8cb-40b8-ae09-e94423a126a5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.203975 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw8d8\" (UniqueName: \"kubernetes.io/projected/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-kube-api-access-cw8d8\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc 
kubenswrapper[4890]: I0121 15:47:22.255671 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.468162 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.650326 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk"] Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.652997 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f32d47-a574-442b-be16-9cdb86b30aa8-metrics-certs\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.653132 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-memberlist\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:22 crc kubenswrapper[4890]: E0121 15:47:22.653268 4890 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 15:47:22 crc kubenswrapper[4890]: E0121 15:47:22.653326 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-memberlist podName:de8bead9-3c0f-4ba7-908b-c8c21763eb7c nodeName:}" failed. No retries permitted until 2026-01-21 15:47:23.653308678 +0000 UTC m=+926.014751097 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-memberlist") pod "speaker-f25hv" (UID: "de8bead9-3c0f-4ba7-908b-c8c21763eb7c") : secret "metallb-memberlist" not found Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.658708 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f32d47-a574-442b-be16-9cdb86b30aa8-metrics-certs\") pod \"frr-k8s-v7hrv\" (UID: \"d1f32d47-a574-442b-be16-9cdb86b30aa8\") " pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.668124 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-z6xnk"] Jan 21 15:47:22 crc kubenswrapper[4890]: I0121 15:47:22.783004 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:23 crc kubenswrapper[4890]: I0121 15:47:23.028558 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-z6xnk" event={"ID":"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9","Type":"ContainerStarted","Data":"c13cfa85bd6e4ab4fe950a497a250ea9b14e962216d1b4c0261be58dd55b75de"} Jan 21 15:47:23 crc kubenswrapper[4890]: I0121 15:47:23.030011 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" event={"ID":"7c1ba220-b8cb-40b8-ae09-e94423a126a5","Type":"ContainerStarted","Data":"3d7508ef119fc99d844b429d3cd63afd9cd45ea26391b0649e3a7e02781cd6d6"} Jan 21 15:47:23 crc kubenswrapper[4890]: I0121 15:47:23.670328 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-memberlist\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:23 crc kubenswrapper[4890]: I0121 
15:47:23.676896 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/de8bead9-3c0f-4ba7-908b-c8c21763eb7c-memberlist\") pod \"speaker-f25hv\" (UID: \"de8bead9-3c0f-4ba7-908b-c8c21763eb7c\") " pod="metallb-system/speaker-f25hv" Jan 21 15:47:23 crc kubenswrapper[4890]: I0121 15:47:23.747199 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-f25hv" Jan 21 15:47:23 crc kubenswrapper[4890]: W0121 15:47:23.768493 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde8bead9_3c0f_4ba7_908b_c8c21763eb7c.slice/crio-ce71801111471cc6554905d08f68fc503d37fec97e06af72381a177fbff757c9 WatchSource:0}: Error finding container ce71801111471cc6554905d08f68fc503d37fec97e06af72381a177fbff757c9: Status 404 returned error can't find the container with id ce71801111471cc6554905d08f68fc503d37fec97e06af72381a177fbff757c9 Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.047618 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-z6xnk" event={"ID":"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9","Type":"ContainerStarted","Data":"674131e33439d7a3682e3977e1c10596e841a81c72d7c12db7e89f840edde9fa"} Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.047663 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-z6xnk" event={"ID":"cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9","Type":"ContainerStarted","Data":"32bd3b5c0b97ac3faa0e0c46ceed37b0ddffb97198b3258efe1a8d0d2fed9e0b"} Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.048490 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.049512 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" 
event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerStarted","Data":"d389a228ba102fd38380322dd4aa4ec2d4f30d8cd110c718a9919a587f12d33f"} Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.059893 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-f25hv" event={"ID":"de8bead9-3c0f-4ba7-908b-c8c21763eb7c","Type":"ContainerStarted","Data":"ddf01e7cd9b29a25055c4d817c7d792e7016234e23fbea6e8d03b9801f7c0500"} Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.059952 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-f25hv" event={"ID":"de8bead9-3c0f-4ba7-908b-c8c21763eb7c","Type":"ContainerStarted","Data":"ce71801111471cc6554905d08f68fc503d37fec97e06af72381a177fbff757c9"} Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.059964 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bq2t7" podUID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerName="registry-server" containerID="cri-o://816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555" gracePeriod=2 Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.069085 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-z6xnk" podStartSLOduration=3.069062034 podStartE2EDuration="3.069062034s" podCreationTimestamp="2026-01-21 15:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:24.067489955 +0000 UTC m=+926.428932364" watchObservedRunningTime="2026-01-21 15:47:24.069062034 +0000 UTC m=+926.430504443" Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.574765 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.695157 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-catalog-content\") pod \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.695200 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-utilities\") pod \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.695230 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8jrc\" (UniqueName: \"kubernetes.io/projected/b490b293-5c64-4424-bfca-4fb8ba23ef0f-kube-api-access-l8jrc\") pod \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\" (UID: \"b490b293-5c64-4424-bfca-4fb8ba23ef0f\") " Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.696652 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-utilities" (OuterVolumeSpecName: "utilities") pod "b490b293-5c64-4424-bfca-4fb8ba23ef0f" (UID: "b490b293-5c64-4424-bfca-4fb8ba23ef0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.702819 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b490b293-5c64-4424-bfca-4fb8ba23ef0f-kube-api-access-l8jrc" (OuterVolumeSpecName: "kube-api-access-l8jrc") pod "b490b293-5c64-4424-bfca-4fb8ba23ef0f" (UID: "b490b293-5c64-4424-bfca-4fb8ba23ef0f"). InnerVolumeSpecName "kube-api-access-l8jrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.721582 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b490b293-5c64-4424-bfca-4fb8ba23ef0f" (UID: "b490b293-5c64-4424-bfca-4fb8ba23ef0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.797689 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.797723 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b490b293-5c64-4424-bfca-4fb8ba23ef0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:24 crc kubenswrapper[4890]: I0121 15:47:24.797733 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8jrc\" (UniqueName: \"kubernetes.io/projected/b490b293-5c64-4424-bfca-4fb8ba23ef0f-kube-api-access-l8jrc\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.066754 4890 generic.go:334] "Generic (PLEG): container finished" podID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerID="816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555" exitCode=0 Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.066842 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq2t7" event={"ID":"b490b293-5c64-4424-bfca-4fb8ba23ef0f","Type":"ContainerDied","Data":"816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555"} Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.066902 4890 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-bq2t7" event={"ID":"b490b293-5c64-4424-bfca-4fb8ba23ef0f","Type":"ContainerDied","Data":"780ecdfa135c6591e02a03199f04749a7dba5051aed928f8a019060856d1b05d"} Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.066923 4890 scope.go:117] "RemoveContainer" containerID="816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.067062 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq2t7" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.069553 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-f25hv" event={"ID":"de8bead9-3c0f-4ba7-908b-c8c21763eb7c","Type":"ContainerStarted","Data":"a7c20db3a5240bfc1d4b1cdb72a91c2023a2d2de0147ebcbad4a63396db6a609"} Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.090425 4890 scope.go:117] "RemoveContainer" containerID="6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.099660 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-f25hv" podStartSLOduration=4.099627809 podStartE2EDuration="4.099627809s" podCreationTimestamp="2026-01-21 15:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:25.098118311 +0000 UTC m=+927.459560720" watchObservedRunningTime="2026-01-21 15:47:25.099627809 +0000 UTC m=+927.461070218" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.116329 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq2t7"] Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.119880 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq2t7"] Jan 21 15:47:25 crc 
kubenswrapper[4890]: I0121 15:47:25.123857 4890 scope.go:117] "RemoveContainer" containerID="82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.139973 4890 scope.go:117] "RemoveContainer" containerID="816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555" Jan 21 15:47:25 crc kubenswrapper[4890]: E0121 15:47:25.140660 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555\": container with ID starting with 816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555 not found: ID does not exist" containerID="816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.140796 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555"} err="failed to get container status \"816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555\": rpc error: code = NotFound desc = could not find container \"816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555\": container with ID starting with 816b771e5643657f5468391df9f2a6980d2a1873009bdfb91f8747278f856555 not found: ID does not exist" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.140928 4890 scope.go:117] "RemoveContainer" containerID="6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd" Jan 21 15:47:25 crc kubenswrapper[4890]: E0121 15:47:25.142553 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd\": container with ID starting with 6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd not found: ID does not exist" 
containerID="6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.142585 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd"} err="failed to get container status \"6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd\": rpc error: code = NotFound desc = could not find container \"6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd\": container with ID starting with 6621d3c37139e593c84210e45cac8890ac11634ca134c7ebbd131153d0e872dd not found: ID does not exist" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.142615 4890 scope.go:117] "RemoveContainer" containerID="82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c" Jan 21 15:47:25 crc kubenswrapper[4890]: E0121 15:47:25.142948 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c\": container with ID starting with 82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c not found: ID does not exist" containerID="82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.143056 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c"} err="failed to get container status \"82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c\": rpc error: code = NotFound desc = could not find container \"82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c\": container with ID starting with 82f551a7423cde9be360ba3f5663fec6b7ca9df348dc14d901b1be3ee3f1fd2c not found: ID does not exist" Jan 21 15:47:25 crc kubenswrapper[4890]: I0121 15:47:25.925747 4890 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" path="/var/lib/kubelet/pods/b490b293-5c64-4424-bfca-4fb8ba23ef0f/volumes" Jan 21 15:47:26 crc kubenswrapper[4890]: I0121 15:47:26.080125 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-f25hv" Jan 21 15:47:31 crc kubenswrapper[4890]: I0121 15:47:31.118605 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" event={"ID":"7c1ba220-b8cb-40b8-ae09-e94423a126a5","Type":"ContainerStarted","Data":"8a81851b238e0b83e069cc23461c4752b7e91130755993d5d3ad8eee1425fe49"} Jan 21 15:47:31 crc kubenswrapper[4890]: I0121 15:47:31.119236 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:31 crc kubenswrapper[4890]: I0121 15:47:31.121059 4890 generic.go:334] "Generic (PLEG): container finished" podID="d1f32d47-a574-442b-be16-9cdb86b30aa8" containerID="c170c8ba51420380f58a3b5fc57378a17fb59f62e040261f78cc786a62f27bec" exitCode=0 Jan 21 15:47:31 crc kubenswrapper[4890]: I0121 15:47:31.121095 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerDied","Data":"c170c8ba51420380f58a3b5fc57378a17fb59f62e040261f78cc786a62f27bec"} Jan 21 15:47:31 crc kubenswrapper[4890]: I0121 15:47:31.137053 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" podStartSLOduration=1.9384246200000002 podStartE2EDuration="10.13703468s" podCreationTimestamp="2026-01-21 15:47:21 +0000 UTC" firstStartedPulling="2026-01-21 15:47:22.656515468 +0000 UTC m=+925.017957877" lastFinishedPulling="2026-01-21 15:47:30.855125508 +0000 UTC m=+933.216567937" observedRunningTime="2026-01-21 15:47:31.134534568 +0000 UTC m=+933.495976977" 
watchObservedRunningTime="2026-01-21 15:47:31.13703468 +0000 UTC m=+933.498477089" Jan 21 15:47:32 crc kubenswrapper[4890]: I0121 15:47:32.129393 4890 generic.go:334] "Generic (PLEG): container finished" podID="d1f32d47-a574-442b-be16-9cdb86b30aa8" containerID="563ec494976380adc950f035dcdea2ad8b087303fe2e1da36d1ebe1d36794e07" exitCode=0 Jan 21 15:47:32 crc kubenswrapper[4890]: I0121 15:47:32.129479 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerDied","Data":"563ec494976380adc950f035dcdea2ad8b087303fe2e1da36d1ebe1d36794e07"} Jan 21 15:47:33 crc kubenswrapper[4890]: I0121 15:47:33.138476 4890 generic.go:334] "Generic (PLEG): container finished" podID="d1f32d47-a574-442b-be16-9cdb86b30aa8" containerID="25e4a40d6345d6dea1c5af83f46e0bc7dd4b150a51e90154f732ed5a3e77f8fa" exitCode=0 Jan 21 15:47:33 crc kubenswrapper[4890]: I0121 15:47:33.138541 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerDied","Data":"25e4a40d6345d6dea1c5af83f46e0bc7dd4b150a51e90154f732ed5a3e77f8fa"} Jan 21 15:47:33 crc kubenswrapper[4890]: I0121 15:47:33.758249 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-f25hv" Jan 21 15:47:34 crc kubenswrapper[4890]: I0121 15:47:34.154141 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerStarted","Data":"6062cd22a0c5f99eb8bc3d2aa18a825d8de5141183f019eb02310b7392ee3b9c"} Jan 21 15:47:34 crc kubenswrapper[4890]: I0121 15:47:34.154218 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerStarted","Data":"e06a51c87b9abf6c234569de4889228ed8b6044abec9cf812efb1f1351afdef7"} Jan 21 15:47:34 
crc kubenswrapper[4890]: I0121 15:47:34.154233 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerStarted","Data":"ceebbe01618285968f147ce4982c12428d899b0000a86578aa44d9870e58c5c4"} Jan 21 15:47:34 crc kubenswrapper[4890]: I0121 15:47:34.154245 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerStarted","Data":"29f2ff87c4cd268ba2f9816c2c4bfbd3a3f2424de5b5340844160ee2dc2fb86f"} Jan 21 15:47:34 crc kubenswrapper[4890]: I0121 15:47:34.154256 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerStarted","Data":"ec1217efd51ba533a08c6bfb899222d9a8e5890dbd28ffc3f117b82655043ebf"} Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.175098 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v7hrv" event={"ID":"d1f32d47-a574-442b-be16-9cdb86b30aa8","Type":"ContainerStarted","Data":"b2e516303a30ac815f290f9cf524ab7266fa886435ad36a4c8baa34b680c2304"} Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.176473 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.200171 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-v7hrv" podStartSLOduration=6.469264928 podStartE2EDuration="14.200149893s" podCreationTimestamp="2026-01-21 15:47:21 +0000 UTC" firstStartedPulling="2026-01-21 15:47:23.144878494 +0000 UTC m=+925.506320913" lastFinishedPulling="2026-01-21 15:47:30.875763469 +0000 UTC m=+933.237205878" observedRunningTime="2026-01-21 15:47:35.198170794 +0000 UTC m=+937.559613203" watchObservedRunningTime="2026-01-21 15:47:35.200149893 +0000 UTC m=+937.561592302" Jan 21 15:47:35 
crc kubenswrapper[4890]: I0121 15:47:35.226024 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n"] Jan 21 15:47:35 crc kubenswrapper[4890]: E0121 15:47:35.226378 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerName="registry-server" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.226398 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerName="registry-server" Jan 21 15:47:35 crc kubenswrapper[4890]: E0121 15:47:35.226411 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerName="extract-utilities" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.226419 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerName="extract-utilities" Jan 21 15:47:35 crc kubenswrapper[4890]: E0121 15:47:35.226437 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerName="extract-content" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.226443 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerName="extract-content" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.226627 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="b490b293-5c64-4424-bfca-4fb8ba23ef0f" containerName="registry-server" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.227963 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.230609 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.239234 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n"] Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.270726 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.270832 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.270871 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvktc\" (UniqueName: \"kubernetes.io/projected/8a88852c-2a61-4ecb-abb0-5679e06d7c39-kube-api-access-jvktc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: 
I0121 15:47:35.371892 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.371978 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.372015 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvktc\" (UniqueName: \"kubernetes.io/projected/8a88852c-2a61-4ecb-abb0-5679e06d7c39-kube-api-access-jvktc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.372451 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.372609 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.391895 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvktc\" (UniqueName: \"kubernetes.io/projected/8a88852c-2a61-4ecb-abb0-5679e06d7c39-kube-api-access-jvktc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.547491 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:35 crc kubenswrapper[4890]: I0121 15:47:35.985105 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n"] Jan 21 15:47:36 crc kubenswrapper[4890]: W0121 15:47:35.992943 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a88852c_2a61_4ecb_abb0_5679e06d7c39.slice/crio-9f6dfdad515262e0032cd4576a0cab44626b1ef9bfb14a076e19c938f6a6a314 WatchSource:0}: Error finding container 9f6dfdad515262e0032cd4576a0cab44626b1ef9bfb14a076e19c938f6a6a314: Status 404 returned error can't find the container with id 9f6dfdad515262e0032cd4576a0cab44626b1ef9bfb14a076e19c938f6a6a314 Jan 21 15:47:36 crc kubenswrapper[4890]: I0121 15:47:36.181894 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" 
event={"ID":"8a88852c-2a61-4ecb-abb0-5679e06d7c39","Type":"ContainerStarted","Data":"9f6dfdad515262e0032cd4576a0cab44626b1ef9bfb14a076e19c938f6a6a314"} Jan 21 15:47:37 crc kubenswrapper[4890]: I0121 15:47:37.190760 4890 generic.go:334] "Generic (PLEG): container finished" podID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerID="b68a9e1b299ff6ff6447e145eb79a4a06cb2d6e672e8892db12912c766a13150" exitCode=0 Jan 21 15:47:37 crc kubenswrapper[4890]: I0121 15:47:37.190844 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" event={"ID":"8a88852c-2a61-4ecb-abb0-5679e06d7c39","Type":"ContainerDied","Data":"b68a9e1b299ff6ff6447e145eb79a4a06cb2d6e672e8892db12912c766a13150"} Jan 21 15:47:37 crc kubenswrapper[4890]: I0121 15:47:37.784420 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:37 crc kubenswrapper[4890]: I0121 15:47:37.831191 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:42 crc kubenswrapper[4890]: I0121 15:47:42.260274 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-z6xnk" Jan 21 15:47:42 crc kubenswrapper[4890]: I0121 15:47:42.477268 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dqvjk" Jan 21 15:47:43 crc kubenswrapper[4890]: I0121 15:47:43.227931 4890 generic.go:334] "Generic (PLEG): container finished" podID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerID="12028206d42677db8722e6c46e257f42ffce0ff198f987735aa2e494092256e8" exitCode=0 Jan 21 15:47:43 crc kubenswrapper[4890]: I0121 15:47:43.228042 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" 
event={"ID":"8a88852c-2a61-4ecb-abb0-5679e06d7c39","Type":"ContainerDied","Data":"12028206d42677db8722e6c46e257f42ffce0ff198f987735aa2e494092256e8"} Jan 21 15:47:44 crc kubenswrapper[4890]: I0121 15:47:44.236831 4890 generic.go:334] "Generic (PLEG): container finished" podID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerID="2743095b915c8496ba51f43d731e5f83dc04867362b7462c9b958b84a417f54c" exitCode=0 Jan 21 15:47:44 crc kubenswrapper[4890]: I0121 15:47:44.236934 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" event={"ID":"8a88852c-2a61-4ecb-abb0-5679e06d7c39","Type":"ContainerDied","Data":"2743095b915c8496ba51f43d731e5f83dc04867362b7462c9b958b84a417f54c"} Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.470953 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.535628 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvktc\" (UniqueName: \"kubernetes.io/projected/8a88852c-2a61-4ecb-abb0-5679e06d7c39-kube-api-access-jvktc\") pod \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.535690 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-bundle\") pod \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\" (UID: \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.535723 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-util\") pod \"8a88852c-2a61-4ecb-abb0-5679e06d7c39\" (UID: 
\"8a88852c-2a61-4ecb-abb0-5679e06d7c39\") " Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.536595 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-bundle" (OuterVolumeSpecName: "bundle") pod "8a88852c-2a61-4ecb-abb0-5679e06d7c39" (UID: "8a88852c-2a61-4ecb-abb0-5679e06d7c39"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.540485 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a88852c-2a61-4ecb-abb0-5679e06d7c39-kube-api-access-jvktc" (OuterVolumeSpecName: "kube-api-access-jvktc") pod "8a88852c-2a61-4ecb-abb0-5679e06d7c39" (UID: "8a88852c-2a61-4ecb-abb0-5679e06d7c39"). InnerVolumeSpecName "kube-api-access-jvktc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.545094 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-util" (OuterVolumeSpecName: "util") pod "8a88852c-2a61-4ecb-abb0-5679e06d7c39" (UID: "8a88852c-2a61-4ecb-abb0-5679e06d7c39"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.637228 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvktc\" (UniqueName: \"kubernetes.io/projected/8a88852c-2a61-4ecb-abb0-5679e06d7c39-kube-api-access-jvktc\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.637639 4890 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:45 crc kubenswrapper[4890]: I0121 15:47:45.637663 4890 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a88852c-2a61-4ecb-abb0-5679e06d7c39-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:46 crc kubenswrapper[4890]: I0121 15:47:46.253479 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" event={"ID":"8a88852c-2a61-4ecb-abb0-5679e06d7c39","Type":"ContainerDied","Data":"9f6dfdad515262e0032cd4576a0cab44626b1ef9bfb14a076e19c938f6a6a314"} Jan 21 15:47:46 crc kubenswrapper[4890]: I0121 15:47:46.253533 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f6dfdad515262e0032cd4576a0cab44626b1ef9bfb14a076e19c938f6a6a314" Jan 21 15:47:46 crc kubenswrapper[4890]: I0121 15:47:46.253532 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n" Jan 21 15:47:52 crc kubenswrapper[4890]: I0121 15:47:52.786845 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-v7hrv" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.058092 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv"] Jan 21 15:47:53 crc kubenswrapper[4890]: E0121 15:47:53.058338 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerName="util" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.058366 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerName="util" Jan 21 15:47:53 crc kubenswrapper[4890]: E0121 15:47:53.058380 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerName="extract" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.058387 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerName="extract" Jan 21 15:47:53 crc kubenswrapper[4890]: E0121 15:47:53.058394 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerName="pull" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.058402 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerName="pull" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.058519 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a88852c-2a61-4ecb-abb0-5679e06d7c39" containerName="extract" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.058911 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.060876 4890 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-frz46" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.061673 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.062475 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.078415 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv"] Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.131121 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d9c1fa9-3f27-49c9-b6a2-05ac868eddec-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ghqsv\" (UID: \"8d9c1fa9-3f27-49c9-b6a2-05ac868eddec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.131190 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2z2\" (UniqueName: \"kubernetes.io/projected/8d9c1fa9-3f27-49c9-b6a2-05ac868eddec-kube-api-access-9x2z2\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ghqsv\" (UID: \"8d9c1fa9-3f27-49c9-b6a2-05ac868eddec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.232883 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/8d9c1fa9-3f27-49c9-b6a2-05ac868eddec-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ghqsv\" (UID: \"8d9c1fa9-3f27-49c9-b6a2-05ac868eddec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.232929 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2z2\" (UniqueName: \"kubernetes.io/projected/8d9c1fa9-3f27-49c9-b6a2-05ac868eddec-kube-api-access-9x2z2\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ghqsv\" (UID: \"8d9c1fa9-3f27-49c9-b6a2-05ac868eddec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.233556 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d9c1fa9-3f27-49c9-b6a2-05ac868eddec-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ghqsv\" (UID: \"8d9c1fa9-3f27-49c9-b6a2-05ac868eddec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.253752 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x2z2\" (UniqueName: \"kubernetes.io/projected/8d9c1fa9-3f27-49c9-b6a2-05ac868eddec-kube-api-access-9x2z2\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ghqsv\" (UID: \"8d9c1fa9-3f27-49c9-b6a2-05ac868eddec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.377567 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" Jan 21 15:47:53 crc kubenswrapper[4890]: I0121 15:47:53.837031 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv"] Jan 21 15:47:53 crc kubenswrapper[4890]: W0121 15:47:53.849532 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d9c1fa9_3f27_49c9_b6a2_05ac868eddec.slice/crio-1b8b76f76b8c55b8b238a3ea3f6a957507d5903c202c44dcf28557440748546c WatchSource:0}: Error finding container 1b8b76f76b8c55b8b238a3ea3f6a957507d5903c202c44dcf28557440748546c: Status 404 returned error can't find the container with id 1b8b76f76b8c55b8b238a3ea3f6a957507d5903c202c44dcf28557440748546c Jan 21 15:47:54 crc kubenswrapper[4890]: I0121 15:47:54.295614 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" event={"ID":"8d9c1fa9-3f27-49c9-b6a2-05ac868eddec","Type":"ContainerStarted","Data":"1b8b76f76b8c55b8b238a3ea3f6a957507d5903c202c44dcf28557440748546c"} Jan 21 15:48:05 crc kubenswrapper[4890]: I0121 15:48:05.368388 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" event={"ID":"8d9c1fa9-3f27-49c9-b6a2-05ac868eddec","Type":"ContainerStarted","Data":"0ae74bba2af8f5e8013e22cac2e78ee4f36f1968cdd9b8f3f1e64d16ad59a178"} Jan 21 15:48:05 crc kubenswrapper[4890]: I0121 15:48:05.388108 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ghqsv" podStartSLOduration=2.059782819 podStartE2EDuration="12.388083954s" podCreationTimestamp="2026-01-21 15:47:53 +0000 UTC" firstStartedPulling="2026-01-21 15:47:53.852888647 +0000 UTC m=+956.214331076" 
lastFinishedPulling="2026-01-21 15:48:04.181189812 +0000 UTC m=+966.542632211" observedRunningTime="2026-01-21 15:48:05.385030158 +0000 UTC m=+967.746472577" watchObservedRunningTime="2026-01-21 15:48:05.388083954 +0000 UTC m=+967.749526383" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.390628 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7jls9"] Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.391618 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.394033 4890 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fjm6d" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.394536 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.398900 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.407695 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7jls9"] Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.559297 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pstvb\" (UniqueName: \"kubernetes.io/projected/5b1f0ea1-055b-4788-82b4-ac0a20174220-kube-api-access-pstvb\") pod \"cert-manager-webhook-f4fb5df64-7jls9\" (UID: \"5b1f0ea1-055b-4788-82b4-ac0a20174220\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.559537 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/5b1f0ea1-055b-4788-82b4-ac0a20174220-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7jls9\" (UID: \"5b1f0ea1-055b-4788-82b4-ac0a20174220\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.660879 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pstvb\" (UniqueName: \"kubernetes.io/projected/5b1f0ea1-055b-4788-82b4-ac0a20174220-kube-api-access-pstvb\") pod \"cert-manager-webhook-f4fb5df64-7jls9\" (UID: \"5b1f0ea1-055b-4788-82b4-ac0a20174220\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.660973 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b1f0ea1-055b-4788-82b4-ac0a20174220-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7jls9\" (UID: \"5b1f0ea1-055b-4788-82b4-ac0a20174220\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.688328 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pstvb\" (UniqueName: \"kubernetes.io/projected/5b1f0ea1-055b-4788-82b4-ac0a20174220-kube-api-access-pstvb\") pod \"cert-manager-webhook-f4fb5df64-7jls9\" (UID: \"5b1f0ea1-055b-4788-82b4-ac0a20174220\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.690985 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b1f0ea1-055b-4788-82b4-ac0a20174220-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7jls9\" (UID: \"5b1f0ea1-055b-4788-82b4-ac0a20174220\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:07 crc kubenswrapper[4890]: I0121 15:48:07.709457 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:08 crc kubenswrapper[4890]: I0121 15:48:08.152676 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7jls9"] Jan 21 15:48:08 crc kubenswrapper[4890]: I0121 15:48:08.386312 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" event={"ID":"5b1f0ea1-055b-4788-82b4-ac0a20174220","Type":"ContainerStarted","Data":"0192b17a571074d913b40058b0e5b2a138e90c540e8aa56122a3396a526a87c7"} Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.185780 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nrrwk"] Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.186952 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.196970 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrrwk"] Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.388960 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-catalog-content\") pod \"certified-operators-nrrwk\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.389008 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-utilities\") pod \"certified-operators-nrrwk\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 
15:48:09.389057 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-587dg\" (UniqueName: \"kubernetes.io/projected/36e5c26c-d0f5-4029-bd58-54648356a0a0-kube-api-access-587dg\") pod \"certified-operators-nrrwk\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.490990 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-catalog-content\") pod \"certified-operators-nrrwk\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.491221 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-utilities\") pod \"certified-operators-nrrwk\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.491344 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-587dg\" (UniqueName: \"kubernetes.io/projected/36e5c26c-d0f5-4029-bd58-54648356a0a0-kube-api-access-587dg\") pod \"certified-operators-nrrwk\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.491671 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-catalog-content\") pod \"certified-operators-nrrwk\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc 
kubenswrapper[4890]: I0121 15:48:09.491701 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-utilities\") pod \"certified-operators-nrrwk\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.530683 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-587dg\" (UniqueName: \"kubernetes.io/projected/36e5c26c-d0f5-4029-bd58-54648356a0a0-kube-api-access-587dg\") pod \"certified-operators-nrrwk\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:09 crc kubenswrapper[4890]: I0121 15:48:09.808712 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.336744 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrrwk"] Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.563064 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k"] Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.564413 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.567302 4890 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-65k6c" Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.579173 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k"] Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.711828 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84fll\" (UniqueName: \"kubernetes.io/projected/2266e2cc-a129-4ec1-af61-8fc445b56deb-kube-api-access-84fll\") pod \"cert-manager-cainjector-855d9ccff4-nmk9k\" (UID: \"2266e2cc-a129-4ec1-af61-8fc445b56deb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.711885 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2266e2cc-a129-4ec1-af61-8fc445b56deb-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-nmk9k\" (UID: \"2266e2cc-a129-4ec1-af61-8fc445b56deb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.813847 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84fll\" (UniqueName: \"kubernetes.io/projected/2266e2cc-a129-4ec1-af61-8fc445b56deb-kube-api-access-84fll\") pod \"cert-manager-cainjector-855d9ccff4-nmk9k\" (UID: \"2266e2cc-a129-4ec1-af61-8fc445b56deb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.814321 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/2266e2cc-a129-4ec1-af61-8fc445b56deb-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-nmk9k\" (UID: \"2266e2cc-a129-4ec1-af61-8fc445b56deb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.837335 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2266e2cc-a129-4ec1-af61-8fc445b56deb-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-nmk9k\" (UID: \"2266e2cc-a129-4ec1-af61-8fc445b56deb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.837935 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84fll\" (UniqueName: \"kubernetes.io/projected/2266e2cc-a129-4ec1-af61-8fc445b56deb-kube-api-access-84fll\") pod \"cert-manager-cainjector-855d9ccff4-nmk9k\" (UID: \"2266e2cc-a129-4ec1-af61-8fc445b56deb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" Jan 21 15:48:10 crc kubenswrapper[4890]: I0121 15:48:10.889101 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" Jan 21 15:48:11 crc kubenswrapper[4890]: I0121 15:48:11.137325 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k"] Jan 21 15:48:11 crc kubenswrapper[4890]: I0121 15:48:11.417505 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" event={"ID":"2266e2cc-a129-4ec1-af61-8fc445b56deb","Type":"ContainerStarted","Data":"cca39377aa4bb2fc6d254bffb77a1c67f6ced0d5e941810adc294f86f9263831"} Jan 21 15:48:11 crc kubenswrapper[4890]: I0121 15:48:11.420698 4890 generic.go:334] "Generic (PLEG): container finished" podID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerID="5fbd5528164d295990f7e9040f09d338f0f4f5f7003b9ca3ada5f82840b2bf21" exitCode=0 Jan 21 15:48:11 crc kubenswrapper[4890]: I0121 15:48:11.420729 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrrwk" event={"ID":"36e5c26c-d0f5-4029-bd58-54648356a0a0","Type":"ContainerDied","Data":"5fbd5528164d295990f7e9040f09d338f0f4f5f7003b9ca3ada5f82840b2bf21"} Jan 21 15:48:11 crc kubenswrapper[4890]: I0121 15:48:11.420746 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrrwk" event={"ID":"36e5c26c-d0f5-4029-bd58-54648356a0a0","Type":"ContainerStarted","Data":"cb1a087306f0726f35e1d8c46369d828d29820d7e9eed30f49c88b62ecf543fe"} Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.353551 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zwkcg"] Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.357344 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.379775 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwkcg"] Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.385508 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w8ds\" (UniqueName: \"kubernetes.io/projected/c0d99eea-54bc-47be-aade-98138fa5d31e-kube-api-access-4w8ds\") pod \"community-operators-zwkcg\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.385605 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-catalog-content\") pod \"community-operators-zwkcg\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.385717 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-utilities\") pod \"community-operators-zwkcg\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.486740 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-catalog-content\") pod \"community-operators-zwkcg\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.486876 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-utilities\") pod \"community-operators-zwkcg\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.486930 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8ds\" (UniqueName: \"kubernetes.io/projected/c0d99eea-54bc-47be-aade-98138fa5d31e-kube-api-access-4w8ds\") pod \"community-operators-zwkcg\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.487839 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-catalog-content\") pod \"community-operators-zwkcg\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.488131 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-utilities\") pod \"community-operators-zwkcg\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.508723 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8ds\" (UniqueName: \"kubernetes.io/projected/c0d99eea-54bc-47be-aade-98138fa5d31e-kube-api-access-4w8ds\") pod \"community-operators-zwkcg\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.678252 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.762147 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.762231 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:48:18 crc kubenswrapper[4890]: I0121 15:48:18.996416 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwkcg"] Jan 21 15:48:19 crc kubenswrapper[4890]: I0121 15:48:19.495745 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwkcg" event={"ID":"c0d99eea-54bc-47be-aade-98138fa5d31e","Type":"ContainerStarted","Data":"5c2fa804849c4df2f2ce56bfe962851b86d54b3af010081236ecc6d69c24726f"} Jan 21 15:48:20 crc kubenswrapper[4890]: I0121 15:48:20.503626 4890 generic.go:334] "Generic (PLEG): container finished" podID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerID="8abc595ecf0515437b9f4a7371d78c793d13e25e613e7cc578a6df72d66a4835" exitCode=0 Jan 21 15:48:20 crc kubenswrapper[4890]: I0121 15:48:20.503715 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwkcg" event={"ID":"c0d99eea-54bc-47be-aade-98138fa5d31e","Type":"ContainerDied","Data":"8abc595ecf0515437b9f4a7371d78c793d13e25e613e7cc578a6df72d66a4835"} Jan 21 15:48:20 crc kubenswrapper[4890]: I0121 15:48:20.505900 4890 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" event={"ID":"2266e2cc-a129-4ec1-af61-8fc445b56deb","Type":"ContainerStarted","Data":"de193b62415ee64e72c90502224186b23859b07959073d70cad809f71b70ad23"} Jan 21 15:48:20 crc kubenswrapper[4890]: I0121 15:48:20.507881 4890 generic.go:334] "Generic (PLEG): container finished" podID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerID="aa3594f439ba0d4e69b1db8c36e9f5c8bd6a6cf7c0945f9e7d70e97662630395" exitCode=0 Jan 21 15:48:20 crc kubenswrapper[4890]: I0121 15:48:20.507925 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrrwk" event={"ID":"36e5c26c-d0f5-4029-bd58-54648356a0a0","Type":"ContainerDied","Data":"aa3594f439ba0d4e69b1db8c36e9f5c8bd6a6cf7c0945f9e7d70e97662630395"} Jan 21 15:48:20 crc kubenswrapper[4890]: I0121 15:48:20.509810 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" event={"ID":"5b1f0ea1-055b-4788-82b4-ac0a20174220","Type":"ContainerStarted","Data":"6cc3490daa497d5e6b89d170fb68f564cba5c5a2d71a1d47b3305a8772690da3"} Jan 21 15:48:20 crc kubenswrapper[4890]: I0121 15:48:20.510337 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:20 crc kubenswrapper[4890]: I0121 15:48:20.543810 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" podStartSLOduration=2.277343858 podStartE2EDuration="13.543791485s" podCreationTimestamp="2026-01-21 15:48:07 +0000 UTC" firstStartedPulling="2026-01-21 15:48:08.160200752 +0000 UTC m=+970.521643161" lastFinishedPulling="2026-01-21 15:48:19.426648379 +0000 UTC m=+981.788090788" observedRunningTime="2026-01-21 15:48:20.540873562 +0000 UTC m=+982.902315971" watchObservedRunningTime="2026-01-21 15:48:20.543791485 +0000 UTC m=+982.905233894" Jan 21 15:48:20 crc 
kubenswrapper[4890]: I0121 15:48:20.562988 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nmk9k" podStartSLOduration=2.295532045 podStartE2EDuration="10.562967443s" podCreationTimestamp="2026-01-21 15:48:10 +0000 UTC" firstStartedPulling="2026-01-21 15:48:11.159209591 +0000 UTC m=+973.520652010" lastFinishedPulling="2026-01-21 15:48:19.426644999 +0000 UTC m=+981.788087408" observedRunningTime="2026-01-21 15:48:20.559277421 +0000 UTC m=+982.920719830" watchObservedRunningTime="2026-01-21 15:48:20.562967443 +0000 UTC m=+982.924409852" Jan 21 15:48:22 crc kubenswrapper[4890]: I0121 15:48:22.524005 4890 generic.go:334] "Generic (PLEG): container finished" podID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerID="bd11ae1e144431668f233cc7f73ecf1070c781bcbf744ea749be3b493f498561" exitCode=0 Jan 21 15:48:22 crc kubenswrapper[4890]: I0121 15:48:22.524079 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwkcg" event={"ID":"c0d99eea-54bc-47be-aade-98138fa5d31e","Type":"ContainerDied","Data":"bd11ae1e144431668f233cc7f73ecf1070c781bcbf744ea749be3b493f498561"} Jan 21 15:48:22 crc kubenswrapper[4890]: I0121 15:48:22.527392 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrrwk" event={"ID":"36e5c26c-d0f5-4029-bd58-54648356a0a0","Type":"ContainerStarted","Data":"655a42a86a481637a76c4ed31fa96b42fc3e711b1e655527c2647693e9eb5a6a"} Jan 21 15:48:22 crc kubenswrapper[4890]: I0121 15:48:22.574500 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nrrwk" podStartSLOduration=3.864157703 podStartE2EDuration="13.574476279s" podCreationTimestamp="2026-01-21 15:48:09 +0000 UTC" firstStartedPulling="2026-01-21 15:48:11.424232905 +0000 UTC m=+973.785675314" lastFinishedPulling="2026-01-21 15:48:21.134551441 +0000 UTC m=+983.495993890" 
observedRunningTime="2026-01-21 15:48:22.571082185 +0000 UTC m=+984.932524594" watchObservedRunningTime="2026-01-21 15:48:22.574476279 +0000 UTC m=+984.935918698" Jan 21 15:48:25 crc kubenswrapper[4890]: I0121 15:48:25.547075 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwkcg" event={"ID":"c0d99eea-54bc-47be-aade-98138fa5d31e","Type":"ContainerStarted","Data":"f2200242b474f1af1209aaaa517e6ebc831edbae136f6b34d96cbbc308cd88d7"} Jan 21 15:48:25 crc kubenswrapper[4890]: I0121 15:48:25.571739 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zwkcg" podStartSLOduration=3.6550542139999997 podStartE2EDuration="7.571712533s" podCreationTimestamp="2026-01-21 15:48:18 +0000 UTC" firstStartedPulling="2026-01-21 15:48:20.507056558 +0000 UTC m=+982.868498967" lastFinishedPulling="2026-01-21 15:48:24.423714877 +0000 UTC m=+986.785157286" observedRunningTime="2026-01-21 15:48:25.56797492 +0000 UTC m=+987.929417329" watchObservedRunningTime="2026-01-21 15:48:25.571712533 +0000 UTC m=+987.933154942" Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.218818 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-r9vwb"] Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.220138 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-r9vwb" Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.222901 4890 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-snncs" Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.238549 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-r9vwb"] Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.401780 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19ea80f0-a74c-4d15-b17b-642efc0da703-bound-sa-token\") pod \"cert-manager-86cb77c54b-r9vwb\" (UID: \"19ea80f0-a74c-4d15-b17b-642efc0da703\") " pod="cert-manager/cert-manager-86cb77c54b-r9vwb" Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.401842 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7g8g\" (UniqueName: \"kubernetes.io/projected/19ea80f0-a74c-4d15-b17b-642efc0da703-kube-api-access-j7g8g\") pod \"cert-manager-86cb77c54b-r9vwb\" (UID: \"19ea80f0-a74c-4d15-b17b-642efc0da703\") " pod="cert-manager/cert-manager-86cb77c54b-r9vwb" Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.503488 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19ea80f0-a74c-4d15-b17b-642efc0da703-bound-sa-token\") pod \"cert-manager-86cb77c54b-r9vwb\" (UID: \"19ea80f0-a74c-4d15-b17b-642efc0da703\") " pod="cert-manager/cert-manager-86cb77c54b-r9vwb" Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.503830 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7g8g\" (UniqueName: \"kubernetes.io/projected/19ea80f0-a74c-4d15-b17b-642efc0da703-kube-api-access-j7g8g\") pod \"cert-manager-86cb77c54b-r9vwb\" (UID: 
\"19ea80f0-a74c-4d15-b17b-642efc0da703\") " pod="cert-manager/cert-manager-86cb77c54b-r9vwb" Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.528089 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19ea80f0-a74c-4d15-b17b-642efc0da703-bound-sa-token\") pod \"cert-manager-86cb77c54b-r9vwb\" (UID: \"19ea80f0-a74c-4d15-b17b-642efc0da703\") " pod="cert-manager/cert-manager-86cb77c54b-r9vwb" Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.528271 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7g8g\" (UniqueName: \"kubernetes.io/projected/19ea80f0-a74c-4d15-b17b-642efc0da703-kube-api-access-j7g8g\") pod \"cert-manager-86cb77c54b-r9vwb\" (UID: \"19ea80f0-a74c-4d15-b17b-642efc0da703\") " pod="cert-manager/cert-manager-86cb77c54b-r9vwb" Jan 21 15:48:26 crc kubenswrapper[4890]: I0121 15:48:26.594966 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-r9vwb" Jan 21 15:48:27 crc kubenswrapper[4890]: I0121 15:48:27.053758 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-r9vwb"] Jan 21 15:48:27 crc kubenswrapper[4890]: I0121 15:48:27.557662 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-r9vwb" event={"ID":"19ea80f0-a74c-4d15-b17b-642efc0da703","Type":"ContainerStarted","Data":"fb9ca071aa318dcc50991b5cc9c6a238591ebeb8de3c08fc92f4fdf6646b76dd"} Jan 21 15:48:27 crc kubenswrapper[4890]: I0121 15:48:27.714081 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jls9" Jan 21 15:48:28 crc kubenswrapper[4890]: I0121 15:48:28.678796 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:28 crc kubenswrapper[4890]: I0121 15:48:28.679291 
4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:28 crc kubenswrapper[4890]: I0121 15:48:28.724560 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:29 crc kubenswrapper[4890]: I0121 15:48:29.610412 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:29 crc kubenswrapper[4890]: I0121 15:48:29.657192 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zwkcg"] Jan 21 15:48:29 crc kubenswrapper[4890]: I0121 15:48:29.809529 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:29 crc kubenswrapper[4890]: I0121 15:48:29.809573 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:29 crc kubenswrapper[4890]: I0121 15:48:29.849391 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:30 crc kubenswrapper[4890]: I0121 15:48:30.577284 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-r9vwb" event={"ID":"19ea80f0-a74c-4d15-b17b-642efc0da703","Type":"ContainerStarted","Data":"99fb3bdb8a9b034b80bc2b4f4cfa4a94242f4a479d7e49996e30ba66d9a02d5d"} Jan 21 15:48:30 crc kubenswrapper[4890]: I0121 15:48:30.642877 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:30 crc kubenswrapper[4890]: I0121 15:48:30.666799 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-r9vwb" podStartSLOduration=4.666773846 
podStartE2EDuration="4.666773846s" podCreationTimestamp="2026-01-21 15:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:30.605571779 +0000 UTC m=+992.967014228" watchObservedRunningTime="2026-01-21 15:48:30.666773846 +0000 UTC m=+993.028216255" Jan 21 15:48:31 crc kubenswrapper[4890]: I0121 15:48:31.365374 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrrwk"] Jan 21 15:48:31 crc kubenswrapper[4890]: I0121 15:48:31.584439 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zwkcg" podUID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerName="registry-server" containerID="cri-o://f2200242b474f1af1209aaaa517e6ebc831edbae136f6b34d96cbbc308cd88d7" gracePeriod=2 Jan 21 15:48:32 crc kubenswrapper[4890]: I0121 15:48:32.591704 4890 generic.go:334] "Generic (PLEG): container finished" podID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerID="f2200242b474f1af1209aaaa517e6ebc831edbae136f6b34d96cbbc308cd88d7" exitCode=0 Jan 21 15:48:32 crc kubenswrapper[4890]: I0121 15:48:32.592222 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nrrwk" podUID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerName="registry-server" containerID="cri-o://655a42a86a481637a76c4ed31fa96b42fc3e711b1e655527c2647693e9eb5a6a" gracePeriod=2 Jan 21 15:48:32 crc kubenswrapper[4890]: I0121 15:48:32.592510 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwkcg" event={"ID":"c0d99eea-54bc-47be-aade-98138fa5d31e","Type":"ContainerDied","Data":"f2200242b474f1af1209aaaa517e6ebc831edbae136f6b34d96cbbc308cd88d7"} Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.202797 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.331318 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-utilities\") pod \"c0d99eea-54bc-47be-aade-98138fa5d31e\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.331384 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-catalog-content\") pod \"c0d99eea-54bc-47be-aade-98138fa5d31e\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.331505 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w8ds\" (UniqueName: \"kubernetes.io/projected/c0d99eea-54bc-47be-aade-98138fa5d31e-kube-api-access-4w8ds\") pod \"c0d99eea-54bc-47be-aade-98138fa5d31e\" (UID: \"c0d99eea-54bc-47be-aade-98138fa5d31e\") " Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.332405 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-utilities" (OuterVolumeSpecName: "utilities") pod "c0d99eea-54bc-47be-aade-98138fa5d31e" (UID: "c0d99eea-54bc-47be-aade-98138fa5d31e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.337674 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d99eea-54bc-47be-aade-98138fa5d31e-kube-api-access-4w8ds" (OuterVolumeSpecName: "kube-api-access-4w8ds") pod "c0d99eea-54bc-47be-aade-98138fa5d31e" (UID: "c0d99eea-54bc-47be-aade-98138fa5d31e"). InnerVolumeSpecName "kube-api-access-4w8ds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.387513 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0d99eea-54bc-47be-aade-98138fa5d31e" (UID: "c0d99eea-54bc-47be-aade-98138fa5d31e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.432741 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w8ds\" (UniqueName: \"kubernetes.io/projected/c0d99eea-54bc-47be-aade-98138fa5d31e-kube-api-access-4w8ds\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.432772 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.432782 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d99eea-54bc-47be-aade-98138fa5d31e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.600130 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwkcg" event={"ID":"c0d99eea-54bc-47be-aade-98138fa5d31e","Type":"ContainerDied","Data":"5c2fa804849c4df2f2ce56bfe962851b86d54b3af010081236ecc6d69c24726f"} Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.600198 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwkcg" Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.600234 4890 scope.go:117] "RemoveContainer" containerID="f2200242b474f1af1209aaaa517e6ebc831edbae136f6b34d96cbbc308cd88d7" Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.642226 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zwkcg"] Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.646485 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zwkcg"] Jan 21 15:48:33 crc kubenswrapper[4890]: I0121 15:48:33.920174 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d99eea-54bc-47be-aade-98138fa5d31e" path="/var/lib/kubelet/pods/c0d99eea-54bc-47be-aade-98138fa5d31e/volumes" Jan 21 15:48:35 crc kubenswrapper[4890]: I0121 15:48:35.475744 4890 scope.go:117] "RemoveContainer" containerID="bd11ae1e144431668f233cc7f73ecf1070c781bcbf744ea749be3b493f498561" Jan 21 15:48:35 crc kubenswrapper[4890]: I0121 15:48:35.517670 4890 scope.go:117] "RemoveContainer" containerID="8abc595ecf0515437b9f4a7371d78c793d13e25e613e7cc578a6df72d66a4835" Jan 21 15:48:36 crc kubenswrapper[4890]: I0121 15:48:36.627540 4890 generic.go:334] "Generic (PLEG): container finished" podID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerID="655a42a86a481637a76c4ed31fa96b42fc3e711b1e655527c2647693e9eb5a6a" exitCode=0 Jan 21 15:48:36 crc kubenswrapper[4890]: I0121 15:48:36.627606 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrrwk" event={"ID":"36e5c26c-d0f5-4029-bd58-54648356a0a0","Type":"ContainerDied","Data":"655a42a86a481637a76c4ed31fa96b42fc3e711b1e655527c2647693e9eb5a6a"} Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.376499 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.397632 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-utilities\") pod \"36e5c26c-d0f5-4029-bd58-54648356a0a0\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.397680 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-587dg\" (UniqueName: \"kubernetes.io/projected/36e5c26c-d0f5-4029-bd58-54648356a0a0-kube-api-access-587dg\") pod \"36e5c26c-d0f5-4029-bd58-54648356a0a0\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.397801 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-catalog-content\") pod \"36e5c26c-d0f5-4029-bd58-54648356a0a0\" (UID: \"36e5c26c-d0f5-4029-bd58-54648356a0a0\") " Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.400124 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-utilities" (OuterVolumeSpecName: "utilities") pod "36e5c26c-d0f5-4029-bd58-54648356a0a0" (UID: "36e5c26c-d0f5-4029-bd58-54648356a0a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.436596 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e5c26c-d0f5-4029-bd58-54648356a0a0-kube-api-access-587dg" (OuterVolumeSpecName: "kube-api-access-587dg") pod "36e5c26c-d0f5-4029-bd58-54648356a0a0" (UID: "36e5c26c-d0f5-4029-bd58-54648356a0a0"). InnerVolumeSpecName "kube-api-access-587dg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.439854 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36e5c26c-d0f5-4029-bd58-54648356a0a0" (UID: "36e5c26c-d0f5-4029-bd58-54648356a0a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.499160 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.499224 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36e5c26c-d0f5-4029-bd58-54648356a0a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.499245 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-587dg\" (UniqueName: \"kubernetes.io/projected/36e5c26c-d0f5-4029-bd58-54648356a0a0-kube-api-access-587dg\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.637068 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrrwk" event={"ID":"36e5c26c-d0f5-4029-bd58-54648356a0a0","Type":"ContainerDied","Data":"cb1a087306f0726f35e1d8c46369d828d29820d7e9eed30f49c88b62ecf543fe"} Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.637120 4890 scope.go:117] "RemoveContainer" containerID="655a42a86a481637a76c4ed31fa96b42fc3e711b1e655527c2647693e9eb5a6a" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.637130 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nrrwk" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.655068 4890 scope.go:117] "RemoveContainer" containerID="aa3594f439ba0d4e69b1db8c36e9f5c8bd6a6cf7c0945f9e7d70e97662630395" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.674549 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrrwk"] Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.675071 4890 scope.go:117] "RemoveContainer" containerID="5fbd5528164d295990f7e9040f09d338f0f4f5f7003b9ca3ada5f82840b2bf21" Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.678737 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nrrwk"] Jan 21 15:48:37 crc kubenswrapper[4890]: I0121 15:48:37.924279 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36e5c26c-d0f5-4029-bd58-54648356a0a0" path="/var/lib/kubelet/pods/36e5c26c-d0f5-4029-bd58-54648356a0a0/volumes" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.580772 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4dn5g"] Jan 21 15:48:38 crc kubenswrapper[4890]: E0121 15:48:38.581054 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerName="registry-server" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.581070 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerName="registry-server" Jan 21 15:48:38 crc kubenswrapper[4890]: E0121 15:48:38.581107 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerName="extract-content" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.581116 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerName="extract-content" 
Jan 21 15:48:38 crc kubenswrapper[4890]: E0121 15:48:38.581127 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerName="extract-utilities" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.581138 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerName="extract-utilities" Jan 21 15:48:38 crc kubenswrapper[4890]: E0121 15:48:38.581148 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerName="extract-content" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.581155 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerName="extract-content" Jan 21 15:48:38 crc kubenswrapper[4890]: E0121 15:48:38.581166 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerName="registry-server" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.581174 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e5c26c-d0f5-4029-bd58-54648356a0a0" containerName="registry-server" Jan 21 15:48:38 crc kubenswrapper[4890]: E0121 15:48:38.581185 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerName="extract-utilities" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.581192 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerName="extract-utilities" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.581342 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d99eea-54bc-47be-aade-98138fa5d31e" containerName="registry-server" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.581383 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e5c26c-d0f5-4029-bd58-54648356a0a0" 
containerName="registry-server" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.582019 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4dn5g" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.584221 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.584463 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-d4fp4" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.587334 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.592989 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4dn5g"] Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.610395 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsbr\" (UniqueName: \"kubernetes.io/projected/2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9-kube-api-access-sxsbr\") pod \"openstack-operator-index-4dn5g\" (UID: \"2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9\") " pod="openstack-operators/openstack-operator-index-4dn5g" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.711922 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxsbr\" (UniqueName: \"kubernetes.io/projected/2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9-kube-api-access-sxsbr\") pod \"openstack-operator-index-4dn5g\" (UID: \"2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9\") " pod="openstack-operators/openstack-operator-index-4dn5g" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.757734 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxsbr\" (UniqueName: 
\"kubernetes.io/projected/2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9-kube-api-access-sxsbr\") pod \"openstack-operator-index-4dn5g\" (UID: \"2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9\") " pod="openstack-operators/openstack-operator-index-4dn5g" Jan 21 15:48:38 crc kubenswrapper[4890]: I0121 15:48:38.915102 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4dn5g" Jan 21 15:48:39 crc kubenswrapper[4890]: I0121 15:48:39.356078 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4dn5g"] Jan 21 15:48:39 crc kubenswrapper[4890]: I0121 15:48:39.655262 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4dn5g" event={"ID":"2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9","Type":"ContainerStarted","Data":"cba446616a2e165ce62908f1f840814820e1b9f8602afd6c0b93800209e09785"} Jan 21 15:48:41 crc kubenswrapper[4890]: I0121 15:48:41.672287 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4dn5g" event={"ID":"2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9","Type":"ContainerStarted","Data":"086301e235b8ca2723decb4ec7a01bbd7b2ab1d8cbd743d0f5393a7b443a024a"} Jan 21 15:48:48 crc kubenswrapper[4890]: I0121 15:48:48.762202 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:48:48 crc kubenswrapper[4890]: I0121 15:48:48.763208 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 21 15:48:48 crc kubenswrapper[4890]: I0121 15:48:48.916056 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4dn5g" Jan 21 15:48:48 crc kubenswrapper[4890]: I0121 15:48:48.916122 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4dn5g" Jan 21 15:48:48 crc kubenswrapper[4890]: I0121 15:48:48.946096 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4dn5g" Jan 21 15:48:48 crc kubenswrapper[4890]: I0121 15:48:48.970959 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4dn5g" podStartSLOduration=9.72349715 podStartE2EDuration="10.970940187s" podCreationTimestamp="2026-01-21 15:48:38 +0000 UTC" firstStartedPulling="2026-01-21 15:48:39.372078186 +0000 UTC m=+1001.733520605" lastFinishedPulling="2026-01-21 15:48:40.619521233 +0000 UTC m=+1002.980963642" observedRunningTime="2026-01-21 15:48:41.693440511 +0000 UTC m=+1004.054882920" watchObservedRunningTime="2026-01-21 15:48:48.970940187 +0000 UTC m=+1011.332382606" Jan 21 15:48:49 crc kubenswrapper[4890]: I0121 15:48:49.770064 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4dn5g" Jan 21 15:48:52 crc kubenswrapper[4890]: I0121 15:48:52.821886 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6"] Jan 21 15:48:52 crc kubenswrapper[4890]: I0121 15:48:52.823304 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:52 crc kubenswrapper[4890]: I0121 15:48:52.830677 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7p48g" Jan 21 15:48:52 crc kubenswrapper[4890]: I0121 15:48:52.831397 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6"] Jan 21 15:48:52 crc kubenswrapper[4890]: I0121 15:48:52.922443 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:52 crc kubenswrapper[4890]: I0121 15:48:52.922513 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2v7t\" (UniqueName: \"kubernetes.io/projected/b3a9acd8-9d34-419d-861b-232a4de671b9-kube-api-access-n2v7t\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:52 crc kubenswrapper[4890]: I0121 15:48:52.922585 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:53 crc kubenswrapper[4890]: I0121 
15:48:53.023914 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:53 crc kubenswrapper[4890]: I0121 15:48:53.024061 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:53 crc kubenswrapper[4890]: I0121 15:48:53.024123 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2v7t\" (UniqueName: \"kubernetes.io/projected/b3a9acd8-9d34-419d-861b-232a4de671b9-kube-api-access-n2v7t\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:53 crc kubenswrapper[4890]: I0121 15:48:53.025426 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:53 crc kubenswrapper[4890]: I0121 15:48:53.025460 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:53 crc kubenswrapper[4890]: I0121 15:48:53.045861 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2v7t\" (UniqueName: \"kubernetes.io/projected/b3a9acd8-9d34-419d-861b-232a4de671b9-kube-api-access-n2v7t\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:53 crc kubenswrapper[4890]: I0121 15:48:53.140724 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:53 crc kubenswrapper[4890]: I0121 15:48:53.575378 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6"] Jan 21 15:48:53 crc kubenswrapper[4890]: I0121 15:48:53.758392 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" event={"ID":"b3a9acd8-9d34-419d-861b-232a4de671b9","Type":"ContainerStarted","Data":"9eb1e2fd129c04cf1f1e3ed8e61c95c4e19bb946966db49c203814ce80f77700"} Jan 21 15:48:54 crc kubenswrapper[4890]: I0121 15:48:54.921678 4890 generic.go:334] "Generic (PLEG): container finished" podID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerID="fa1c5a40b44c827ffcc33765a7f0393d77e4e4080a96ef162a811dc400d25d90" exitCode=0 Jan 21 15:48:54 crc kubenswrapper[4890]: I0121 15:48:54.921741 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" event={"ID":"b3a9acd8-9d34-419d-861b-232a4de671b9","Type":"ContainerDied","Data":"fa1c5a40b44c827ffcc33765a7f0393d77e4e4080a96ef162a811dc400d25d90"} Jan 21 15:48:55 crc kubenswrapper[4890]: I0121 15:48:55.936034 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" event={"ID":"b3a9acd8-9d34-419d-861b-232a4de671b9","Type":"ContainerStarted","Data":"3ebfd48059e9c2f526878ead577d8eeb9d0b3944ec9b26a2869078ad17fb1547"} Jan 21 15:48:56 crc kubenswrapper[4890]: I0121 15:48:56.948110 4890 generic.go:334] "Generic (PLEG): container finished" podID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerID="3ebfd48059e9c2f526878ead577d8eeb9d0b3944ec9b26a2869078ad17fb1547" exitCode=0 Jan 21 15:48:56 crc kubenswrapper[4890]: I0121 15:48:56.948154 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" event={"ID":"b3a9acd8-9d34-419d-861b-232a4de671b9","Type":"ContainerDied","Data":"3ebfd48059e9c2f526878ead577d8eeb9d0b3944ec9b26a2869078ad17fb1547"} Jan 21 15:48:57 crc kubenswrapper[4890]: I0121 15:48:57.957552 4890 generic.go:334] "Generic (PLEG): container finished" podID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerID="1827c9a76baa6119919fbeac02a5224d13473af69f5ae5313fb175951a4c1c7e" exitCode=0 Jan 21 15:48:57 crc kubenswrapper[4890]: I0121 15:48:57.957671 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" event={"ID":"b3a9acd8-9d34-419d-861b-232a4de671b9","Type":"ContainerDied","Data":"1827c9a76baa6119919fbeac02a5224d13473af69f5ae5313fb175951a4c1c7e"} Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.248716 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.335482 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2v7t\" (UniqueName: \"kubernetes.io/projected/b3a9acd8-9d34-419d-861b-232a4de671b9-kube-api-access-n2v7t\") pod \"b3a9acd8-9d34-419d-861b-232a4de671b9\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.335612 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-bundle\") pod \"b3a9acd8-9d34-419d-861b-232a4de671b9\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.335670 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-util\") pod \"b3a9acd8-9d34-419d-861b-232a4de671b9\" (UID: \"b3a9acd8-9d34-419d-861b-232a4de671b9\") " Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.337407 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-bundle" (OuterVolumeSpecName: "bundle") pod "b3a9acd8-9d34-419d-861b-232a4de671b9" (UID: "b3a9acd8-9d34-419d-861b-232a4de671b9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.342458 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a9acd8-9d34-419d-861b-232a4de671b9-kube-api-access-n2v7t" (OuterVolumeSpecName: "kube-api-access-n2v7t") pod "b3a9acd8-9d34-419d-861b-232a4de671b9" (UID: "b3a9acd8-9d34-419d-861b-232a4de671b9"). InnerVolumeSpecName "kube-api-access-n2v7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.372916 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-util" (OuterVolumeSpecName: "util") pod "b3a9acd8-9d34-419d-861b-232a4de671b9" (UID: "b3a9acd8-9d34-419d-861b-232a4de671b9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.437646 4890 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.437743 4890 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3a9acd8-9d34-419d-861b-232a4de671b9-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.437767 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2v7t\" (UniqueName: \"kubernetes.io/projected/b3a9acd8-9d34-419d-861b-232a4de671b9-kube-api-access-n2v7t\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.976280 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" event={"ID":"b3a9acd8-9d34-419d-861b-232a4de671b9","Type":"ContainerDied","Data":"9eb1e2fd129c04cf1f1e3ed8e61c95c4e19bb946966db49c203814ce80f77700"} Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.976680 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eb1e2fd129c04cf1f1e3ed8e61c95c4e19bb946966db49c203814ce80f77700" Jan 21 15:48:59 crc kubenswrapper[4890]: I0121 15:48:59.976403 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6" Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.518754 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd"] Jan 21 15:49:01 crc kubenswrapper[4890]: E0121 15:49:01.519045 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerName="pull" Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.519060 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerName="pull" Jan 21 15:49:01 crc kubenswrapper[4890]: E0121 15:49:01.519072 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerName="extract" Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.519079 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerName="extract" Jan 21 15:49:01 crc kubenswrapper[4890]: E0121 15:49:01.519097 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerName="util" Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.519104 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerName="util" Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.519230 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a9acd8-9d34-419d-861b-232a4de671b9" containerName="extract" Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.520049 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" Jan 21 15:49:01 crc kubenswrapper[4890]: W0121 15:49:01.521687 4890 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-n4bj6": failed to list *v1.Secret: secrets "openstack-operator-controller-init-dockercfg-n4bj6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Jan 21 15:49:01 crc kubenswrapper[4890]: E0121 15:49:01.521727 4890 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-init-dockercfg-n4bj6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-operator-controller-init-dockercfg-n4bj6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.550671 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd"] Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.671101 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gb8h\" (UniqueName: \"kubernetes.io/projected/e99a0335-92f6-4871-983a-b61c4d78256e-kube-api-access-6gb8h\") pod \"openstack-operator-controller-init-6d4d7d8545-x45bd\" (UID: \"e99a0335-92f6-4871-983a-b61c4d78256e\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.772830 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gb8h\" (UniqueName: 
\"kubernetes.io/projected/e99a0335-92f6-4871-983a-b61c4d78256e-kube-api-access-6gb8h\") pod \"openstack-operator-controller-init-6d4d7d8545-x45bd\" (UID: \"e99a0335-92f6-4871-983a-b61c4d78256e\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" Jan 21 15:49:01 crc kubenswrapper[4890]: I0121 15:49:01.790467 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gb8h\" (UniqueName: \"kubernetes.io/projected/e99a0335-92f6-4871-983a-b61c4d78256e-kube-api-access-6gb8h\") pod \"openstack-operator-controller-init-6d4d7d8545-x45bd\" (UID: \"e99a0335-92f6-4871-983a-b61c4d78256e\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" Jan 21 15:49:02 crc kubenswrapper[4890]: I0121 15:49:02.836740 4890 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 21 15:49:02 crc kubenswrapper[4890]: I0121 15:49:02.836845 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" Jan 21 15:49:02 crc kubenswrapper[4890]: I0121 15:49:02.848561 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-n4bj6" Jan 21 15:49:03 crc kubenswrapper[4890]: I0121 15:49:03.045651 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd"] Jan 21 15:49:04 crc kubenswrapper[4890]: I0121 15:49:04.023794 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" event={"ID":"e99a0335-92f6-4871-983a-b61c4d78256e","Type":"ContainerStarted","Data":"5c7667db36e2de8f64443adb3fd1829e957e36d2c942d9b7c92871ed16927cf2"} Jan 21 15:49:08 crc kubenswrapper[4890]: I0121 15:49:08.063403 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" event={"ID":"e99a0335-92f6-4871-983a-b61c4d78256e","Type":"ContainerStarted","Data":"6db24db15660e7930bde84ec3c3f769fc8fc949278d54d662e104da9a43f4438"} Jan 21 15:49:08 crc kubenswrapper[4890]: I0121 15:49:08.064013 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" Jan 21 15:49:08 crc kubenswrapper[4890]: I0121 15:49:08.098586 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" podStartSLOduration=2.590623189 podStartE2EDuration="7.098550587s" podCreationTimestamp="2026-01-21 15:49:01 +0000 UTC" firstStartedPulling="2026-01-21 15:49:03.059899281 +0000 UTC m=+1025.421341690" lastFinishedPulling="2026-01-21 15:49:07.567826679 +0000 UTC m=+1029.929269088" observedRunningTime="2026-01-21 15:49:08.093562453 +0000 UTC m=+1030.455004882" 
watchObservedRunningTime="2026-01-21 15:49:08.098550587 +0000 UTC m=+1030.459993006" Jan 21 15:49:12 crc kubenswrapper[4890]: I0121 15:49:12.838802 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-x45bd" Jan 21 15:49:18 crc kubenswrapper[4890]: I0121 15:49:18.761736 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:49:18 crc kubenswrapper[4890]: I0121 15:49:18.762570 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:49:18 crc kubenswrapper[4890]: I0121 15:49:18.762635 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:49:18 crc kubenswrapper[4890]: I0121 15:49:18.763241 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b81d20500077e709078904e361919a2211cb0af68d145b245b901c65377ab4de"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:49:18 crc kubenswrapper[4890]: I0121 15:49:18.763317 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" 
containerID="cri-o://b81d20500077e709078904e361919a2211cb0af68d145b245b901c65377ab4de" gracePeriod=600 Jan 21 15:49:20 crc kubenswrapper[4890]: I0121 15:49:20.162028 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="b81d20500077e709078904e361919a2211cb0af68d145b245b901c65377ab4de" exitCode=0 Jan 21 15:49:20 crc kubenswrapper[4890]: I0121 15:49:20.162100 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"b81d20500077e709078904e361919a2211cb0af68d145b245b901c65377ab4de"} Jan 21 15:49:20 crc kubenswrapper[4890]: I0121 15:49:20.162538 4890 scope.go:117] "RemoveContainer" containerID="15c7eb35f58f393a9ceb7bc41b4e4e73eaeaf05b996fe213d725df9631b7a811" Jan 21 15:49:21 crc kubenswrapper[4890]: I0121 15:49:21.172380 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"d0a634f6e929f7ffc1800d062d4e30092fbcb2b4f2a695698fc22410e40c8906"} Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.207279 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.209759 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.210885 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.211656 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.213774 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lwhqs" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.213973 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5dssp" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.230306 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.237366 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.240989 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.242036 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.244675 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-h6r95"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.245493 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.246099 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlsp7\" (UniqueName: \"kubernetes.io/projected/2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538-kube-api-access-mlsp7\") pod \"barbican-operator-controller-manager-7ddb5c749-f92ld\" (UID: \"2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.246204 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp586\" (UniqueName: \"kubernetes.io/projected/0f4bb54d-23a1-4b41-995f-d7affd9cd504-kube-api-access-zp586\") pod \"cinder-operator-controller-manager-9b68f5989-5c7zf\" (UID: \"0f4bb54d-23a1-4b41-995f-d7affd9cd504\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.249116 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lkctb" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.249638 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.252866 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-5d2rg" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.260861 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.266575 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.279286 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mth8h" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.279464 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-h6r95"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.288925 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.299530 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.300636 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.304903 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-lgk8d" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.319614 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.344422 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.345252 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.347290 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp586\" (UniqueName: \"kubernetes.io/projected/0f4bb54d-23a1-4b41-995f-d7affd9cd504-kube-api-access-zp586\") pod \"cinder-operator-controller-manager-9b68f5989-5c7zf\" (UID: \"0f4bb54d-23a1-4b41-995f-d7affd9cd504\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.347338 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8f4x\" (UniqueName: \"kubernetes.io/projected/91df512f-6657-44f1-b643-c18778e5d159-kube-api-access-n8f4x\") pod \"horizon-operator-controller-manager-77d5c5b54f-pqzjj\" (UID: \"91df512f-6657-44f1-b643-c18778e5d159\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.347411 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h659r\" (UniqueName: \"kubernetes.io/projected/44fcf69f-7131-43c4-9303-f5636c294644-kube-api-access-h659r\") pod \"glance-operator-controller-manager-c6994669c-h6r95\" (UID: \"44fcf69f-7131-43c4-9303-f5636c294644\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.347434 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlsp7\" (UniqueName: \"kubernetes.io/projected/2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538-kube-api-access-mlsp7\") pod \"barbican-operator-controller-manager-7ddb5c749-f92ld\" (UID: \"2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" Jan 21 15:49:39 crc 
kubenswrapper[4890]: I0121 15:49:39.347454 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f6kl\" (UniqueName: \"kubernetes.io/projected/4319998f-d413-4412-bffd-7123d46bce19-kube-api-access-2f6kl\") pod \"designate-operator-controller-manager-9f958b845-l7ndf\" (UID: \"4319998f-d413-4412-bffd-7123d46bce19\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.347484 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jt7g\" (UniqueName: \"kubernetes.io/projected/8791802a-0f5d-4d66-a19b-bf2b373ddd56-kube-api-access-7jt7g\") pod \"heat-operator-controller-manager-594c8c9d5d-v7zt4\" (UID: \"8791802a-0f5d-4d66-a19b-bf2b373ddd56\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.362801 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gv92d" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.367416 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.368501 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.369996 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.373321 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4rk6r" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.377310 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.392771 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.404371 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlsp7\" (UniqueName: \"kubernetes.io/projected/2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538-kube-api-access-mlsp7\") pod \"barbican-operator-controller-manager-7ddb5c749-f92ld\" (UID: \"2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.413738 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp586\" (UniqueName: \"kubernetes.io/projected/0f4bb54d-23a1-4b41-995f-d7affd9cd504-kube-api-access-zp586\") pod \"cinder-operator-controller-manager-9b68f5989-5c7zf\" (UID: \"0f4bb54d-23a1-4b41-995f-d7affd9cd504\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.435558 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr"] Jan 21 
15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.436565 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.443774 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mwm98" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.450307 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl2lb\" (UniqueName: \"kubernetes.io/projected/6dd75ffe-4c90-493d-b5af-313056532562-kube-api-access-sl2lb\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.450365 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwwl\" (UniqueName: \"kubernetes.io/projected/41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14-kube-api-access-9pwwl\") pod \"ironic-operator-controller-manager-78757b4889-xvfb4\" (UID: \"41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.450493 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h659r\" (UniqueName: \"kubernetes.io/projected/44fcf69f-7131-43c4-9303-f5636c294644-kube-api-access-h659r\") pod \"glance-operator-controller-manager-c6994669c-h6r95\" (UID: \"44fcf69f-7131-43c4-9303-f5636c294644\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.450773 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f6kl\" 
(UniqueName: \"kubernetes.io/projected/4319998f-d413-4412-bffd-7123d46bce19-kube-api-access-2f6kl\") pod \"designate-operator-controller-manager-9f958b845-l7ndf\" (UID: \"4319998f-d413-4412-bffd-7123d46bce19\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.450936 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jt7g\" (UniqueName: \"kubernetes.io/projected/8791802a-0f5d-4d66-a19b-bf2b373ddd56-kube-api-access-7jt7g\") pod \"heat-operator-controller-manager-594c8c9d5d-v7zt4\" (UID: \"8791802a-0f5d-4d66-a19b-bf2b373ddd56\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.451235 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8f4x\" (UniqueName: \"kubernetes.io/projected/91df512f-6657-44f1-b643-c18778e5d159-kube-api-access-n8f4x\") pod \"horizon-operator-controller-manager-77d5c5b54f-pqzjj\" (UID: \"91df512f-6657-44f1-b643-c18778e5d159\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.451283 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2r25\" (UniqueName: \"kubernetes.io/projected/d1fd4cb9-f562-48b3-b829-55f48fc8a414-kube-api-access-l2r25\") pod \"keystone-operator-controller-manager-767fdc4f47-rklqr\" (UID: \"d1fd4cb9-f562-48b3-b829-55f48fc8a414\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.451321 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert\") pod 
\"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.479434 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.481247 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.482243 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.494483 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.495947 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.497890 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-djhws" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.503867 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tbmbj" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.516868 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.529833 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jt7g\" (UniqueName: \"kubernetes.io/projected/8791802a-0f5d-4d66-a19b-bf2b373ddd56-kube-api-access-7jt7g\") pod \"heat-operator-controller-manager-594c8c9d5d-v7zt4\" (UID: \"8791802a-0f5d-4d66-a19b-bf2b373ddd56\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.539680 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.543154 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8f4x\" (UniqueName: \"kubernetes.io/projected/91df512f-6657-44f1-b643-c18778e5d159-kube-api-access-n8f4x\") pod \"horizon-operator-controller-manager-77d5c5b54f-pqzjj\" (UID: \"91df512f-6657-44f1-b643-c18778e5d159\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.544947 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f6kl\" (UniqueName: \"kubernetes.io/projected/4319998f-d413-4412-bffd-7123d46bce19-kube-api-access-2f6kl\") pod \"designate-operator-controller-manager-9f958b845-l7ndf\" (UID: \"4319998f-d413-4412-bffd-7123d46bce19\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.547550 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h659r\" (UniqueName: \"kubernetes.io/projected/44fcf69f-7131-43c4-9303-f5636c294644-kube-api-access-h659r\") pod \"glance-operator-controller-manager-c6994669c-h6r95\" (UID: \"44fcf69f-7131-43c4-9303-f5636c294644\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.550687 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.553037 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2ld\" (UniqueName: \"kubernetes.io/projected/ab7d4301-6caa-4a1e-a634-e4a355271b68-kube-api-access-2h2ld\") pod 
\"manila-operator-controller-manager-864f6b75bf-jcfgv\" (UID: \"ab7d4301-6caa-4a1e-a634-e4a355271b68\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.553086 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfh5s\" (UniqueName: \"kubernetes.io/projected/88bf9325-183a-4b37-8278-fdf6a95edf3c-kube-api-access-hfh5s\") pod \"mariadb-operator-controller-manager-c87fff755-nfvzq\" (UID: \"88bf9325-183a-4b37-8278-fdf6a95edf3c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.553130 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2r25\" (UniqueName: \"kubernetes.io/projected/d1fd4cb9-f562-48b3-b829-55f48fc8a414-kube-api-access-l2r25\") pod \"keystone-operator-controller-manager-767fdc4f47-rklqr\" (UID: \"d1fd4cb9-f562-48b3-b829-55f48fc8a414\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.553150 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.553187 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl2lb\" (UniqueName: \"kubernetes.io/projected/6dd75ffe-4c90-493d-b5af-313056532562-kube-api-access-sl2lb\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " 
pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.553209 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwwl\" (UniqueName: \"kubernetes.io/projected/41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14-kube-api-access-9pwwl\") pod \"ironic-operator-controller-manager-78757b4889-xvfb4\" (UID: \"41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" Jan 21 15:49:39 crc kubenswrapper[4890]: E0121 15:49:39.556157 4890 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:39 crc kubenswrapper[4890]: E0121 15:49:39.556227 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert podName:6dd75ffe-4c90-493d-b5af-313056532562 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:40.056209911 +0000 UTC m=+1062.417652390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert") pod "infra-operator-controller-manager-77c48c7859-bbwtr" (UID: "6dd75ffe-4c90-493d-b5af-313056532562") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.565087 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.567676 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.568861 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.568930 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.600471 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zhj5x" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.601877 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.609863 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.635482 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.654203 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwwl\" (UniqueName: \"kubernetes.io/projected/41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14-kube-api-access-9pwwl\") pod \"ironic-operator-controller-manager-78757b4889-xvfb4\" (UID: \"41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.660101 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl2lb\" (UniqueName: \"kubernetes.io/projected/6dd75ffe-4c90-493d-b5af-313056532562-kube-api-access-sl2lb\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.678962 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww88c\" (UniqueName: \"kubernetes.io/projected/cb112e2e-1c3b-4701-87ec-dee15131d2a9-kube-api-access-ww88c\") pod \"neutron-operator-controller-manager-cb4666565-wmm44\" (UID: \"cb112e2e-1c3b-4701-87ec-dee15131d2a9\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.679260 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2ld\" (UniqueName: \"kubernetes.io/projected/ab7d4301-6caa-4a1e-a634-e4a355271b68-kube-api-access-2h2ld\") pod \"manila-operator-controller-manager-864f6b75bf-jcfgv\" (UID: \"ab7d4301-6caa-4a1e-a634-e4a355271b68\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 
15:49:39.680221 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfh5s\" (UniqueName: \"kubernetes.io/projected/88bf9325-183a-4b37-8278-fdf6a95edf3c-kube-api-access-hfh5s\") pod \"mariadb-operator-controller-manager-c87fff755-nfvzq\" (UID: \"88bf9325-183a-4b37-8278-fdf6a95edf3c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.701227 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2r25\" (UniqueName: \"kubernetes.io/projected/d1fd4cb9-f562-48b3-b829-55f48fc8a414-kube-api-access-l2r25\") pod \"keystone-operator-controller-manager-767fdc4f47-rklqr\" (UID: \"d1fd4cb9-f562-48b3-b829-55f48fc8a414\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.731441 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.733177 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.763896 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tnvxw" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.767680 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.786787 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.787555 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww88c\" (UniqueName: \"kubernetes.io/projected/cb112e2e-1c3b-4701-87ec-dee15131d2a9-kube-api-access-ww88c\") pod \"neutron-operator-controller-manager-cb4666565-wmm44\" (UID: \"cb112e2e-1c3b-4701-87ec-dee15131d2a9\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.788437 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2ld\" (UniqueName: \"kubernetes.io/projected/ab7d4301-6caa-4a1e-a634-e4a355271b68-kube-api-access-2h2ld\") pod \"manila-operator-controller-manager-864f6b75bf-jcfgv\" (UID: \"ab7d4301-6caa-4a1e-a634-e4a355271b68\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.788937 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfh5s\" (UniqueName: \"kubernetes.io/projected/88bf9325-183a-4b37-8278-fdf6a95edf3c-kube-api-access-hfh5s\") pod \"mariadb-operator-controller-manager-c87fff755-nfvzq\" (UID: \"88bf9325-183a-4b37-8278-fdf6a95edf3c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.810122 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.834853 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.855869 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww88c\" (UniqueName: \"kubernetes.io/projected/cb112e2e-1c3b-4701-87ec-dee15131d2a9-kube-api-access-ww88c\") pod \"neutron-operator-controller-manager-cb4666565-wmm44\" (UID: \"cb112e2e-1c3b-4701-87ec-dee15131d2a9\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.863426 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.895999 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v852p\" (UniqueName: \"kubernetes.io/projected/eeba017a-ca09-444f-a4c6-895ec31b914b-kube-api-access-v852p\") pod \"nova-operator-controller-manager-65849867d6-4lhwm\" (UID: \"eeba017a-ca09-444f-a4c6-895ec31b914b\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.917421 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.918463 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.934981 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-l2d5l" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.956859 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.969620 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.970767 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.970966 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.974437 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.975490 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.988433 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw"] Jan 21 15:49:39 crc kubenswrapper[4890]: I0121 15:49:39.998956 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v852p\" (UniqueName: \"kubernetes.io/projected/eeba017a-ca09-444f-a4c6-895ec31b914b-kube-api-access-v852p\") pod \"nova-operator-controller-manager-65849867d6-4lhwm\" (UID: \"eeba017a-ca09-444f-a4c6-895ec31b914b\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.003949 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zv5p4" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.004534 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-m29zg" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.004626 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.011482 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.012854 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.024578 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zx694" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.042528 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.075151 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.080334 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v852p\" (UniqueName: \"kubernetes.io/projected/eeba017a-ca09-444f-a4c6-895ec31b914b-kube-api-access-v852p\") pod \"nova-operator-controller-manager-65849867d6-4lhwm\" (UID: \"eeba017a-ca09-444f-a4c6-895ec31b914b\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.100440 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.100482 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgrxs\" (UniqueName: \"kubernetes.io/projected/4950c09f-4cbd-49e4-906f-e4451c610111-kube-api-access-wgrxs\") pod 
\"octavia-operator-controller-manager-7fc9b76cf6-nwlnf\" (UID: \"4950c09f-4cbd-49e4-906f-e4451c610111\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.100530 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.100550 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bks6f\" (UniqueName: \"kubernetes.io/projected/daa2bbb5-55a8-4920-9109-45bcd643bd9f-kube-api-access-bks6f\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.100582 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44m9d\" (UniqueName: \"kubernetes.io/projected/f176bee8-4c10-4d65-bd9c-5e95bdc707c6-kube-api-access-44m9d\") pod \"ovn-operator-controller-manager-55db956ddc-9nwtw\" (UID: \"f176bee8-4c10-4d65-bd9c-5e95bdc707c6\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.100708 4890 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.100750 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert 
podName:6dd75ffe-4c90-493d-b5af-313056532562 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:41.100734928 +0000 UTC m=+1063.462177337 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert") pod "infra-operator-controller-manager-77c48c7859-bbwtr" (UID: "6dd75ffe-4c90-493d-b5af-313056532562") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.104468 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.105438 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.123636 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.129227 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zxxsg" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.129481 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.144182 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.170067 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.171177 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.174431 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nsbpd" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.188613 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.198489 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.199804 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.201567 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.201630 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgrxs\" (UniqueName: \"kubernetes.io/projected/4950c09f-4cbd-49e4-906f-e4451c610111-kube-api-access-wgrxs\") pod \"octavia-operator-controller-manager-7fc9b76cf6-nwlnf\" (UID: \"4950c09f-4cbd-49e4-906f-e4451c610111\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.201690 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bks6f\" 
(UniqueName: \"kubernetes.io/projected/daa2bbb5-55a8-4920-9109-45bcd643bd9f-kube-api-access-bks6f\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.201730 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjnbl\" (UniqueName: \"kubernetes.io/projected/59304f72-3a8b-460b-989d-706b9e898d76-kube-api-access-jjnbl\") pod \"placement-operator-controller-manager-686df47fcb-8qwg4\" (UID: \"59304f72-3a8b-460b-989d-706b9e898d76\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.201749 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44m9d\" (UniqueName: \"kubernetes.io/projected/f176bee8-4c10-4d65-bd9c-5e95bdc707c6-kube-api-access-44m9d\") pod \"ovn-operator-controller-manager-55db956ddc-9nwtw\" (UID: \"f176bee8-4c10-4d65-bd9c-5e95bdc707c6\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.202187 4890 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.202230 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert podName:daa2bbb5-55a8-4920-9109-45bcd643bd9f nodeName:}" failed. No retries permitted until 2026-01-21 15:49:40.70221475 +0000 UTC m=+1063.063657159 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dzvgss" (UID: "daa2bbb5-55a8-4920-9109-45bcd643bd9f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.207403 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9f6jf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.219603 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.223418 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.224176 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.231770 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5vvwc" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.237161 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgrxs\" (UniqueName: \"kubernetes.io/projected/4950c09f-4cbd-49e4-906f-e4451c610111-kube-api-access-wgrxs\") pod \"octavia-operator-controller-manager-7fc9b76cf6-nwlnf\" (UID: \"4950c09f-4cbd-49e4-906f-e4451c610111\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.237200 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44m9d\" (UniqueName: \"kubernetes.io/projected/f176bee8-4c10-4d65-bd9c-5e95bdc707c6-kube-api-access-44m9d\") pod \"ovn-operator-controller-manager-55db956ddc-9nwtw\" (UID: \"f176bee8-4c10-4d65-bd9c-5e95bdc707c6\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.237944 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bks6f\" (UniqueName: \"kubernetes.io/projected/daa2bbb5-55a8-4920-9109-45bcd643bd9f-kube-api-access-bks6f\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.246018 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.272245 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.303333 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96xfr\" (UniqueName: \"kubernetes.io/projected/105a410d-5ae8-44ad-9e48-e7cd00ae3c27-kube-api-access-96xfr\") pod \"telemetry-operator-controller-manager-5f8f495fcf-sqqwl\" (UID: \"105a410d-5ae8-44ad-9e48-e7cd00ae3c27\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.303461 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xmz\" (UniqueName: \"kubernetes.io/projected/03a78be9-bd85-449f-93b6-1379195280c0-kube-api-access-m7xmz\") pod \"test-operator-controller-manager-7cd8bc9dbb-cckzf\" (UID: \"03a78be9-bd85-449f-93b6-1379195280c0\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.304238 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t8g4\" (UniqueName: \"kubernetes.io/projected/acf11348-ddc8-494c-8ecc-ad1f5f44366f-kube-api-access-2t8g4\") pod \"swift-operator-controller-manager-85dd56d4cc-6h8pt\" (UID: \"acf11348-ddc8-494c-8ecc-ad1f5f44366f\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.304366 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjnbl\" (UniqueName: \"kubernetes.io/projected/59304f72-3a8b-460b-989d-706b9e898d76-kube-api-access-jjnbl\") pod \"placement-operator-controller-manager-686df47fcb-8qwg4\" (UID: \"59304f72-3a8b-460b-989d-706b9e898d76\") " 
pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.347954 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjnbl\" (UniqueName: \"kubernetes.io/projected/59304f72-3a8b-460b-989d-706b9e898d76-kube-api-access-jjnbl\") pod \"placement-operator-controller-manager-686df47fcb-8qwg4\" (UID: \"59304f72-3a8b-460b-989d-706b9e898d76\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.370730 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.406112 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t8g4\" (UniqueName: \"kubernetes.io/projected/acf11348-ddc8-494c-8ecc-ad1f5f44366f-kube-api-access-2t8g4\") pod \"swift-operator-controller-manager-85dd56d4cc-6h8pt\" (UID: \"acf11348-ddc8-494c-8ecc-ad1f5f44366f\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.406205 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96xfr\" (UniqueName: \"kubernetes.io/projected/105a410d-5ae8-44ad-9e48-e7cd00ae3c27-kube-api-access-96xfr\") pod \"telemetry-operator-controller-manager-5f8f495fcf-sqqwl\" (UID: \"105a410d-5ae8-44ad-9e48-e7cd00ae3c27\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.406255 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npkcj\" (UniqueName: \"kubernetes.io/projected/435e0e5d-88a8-4737-aa7d-cefffc292c23-kube-api-access-npkcj\") pod 
\"watcher-operator-controller-manager-64cd966744-x2dl5\" (UID: \"435e0e5d-88a8-4737-aa7d-cefffc292c23\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.406295 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xmz\" (UniqueName: \"kubernetes.io/projected/03a78be9-bd85-449f-93b6-1379195280c0-kube-api-access-m7xmz\") pod \"test-operator-controller-manager-7cd8bc9dbb-cckzf\" (UID: \"03a78be9-bd85-449f-93b6-1379195280c0\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.459179 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t8g4\" (UniqueName: \"kubernetes.io/projected/acf11348-ddc8-494c-8ecc-ad1f5f44366f-kube-api-access-2t8g4\") pod \"swift-operator-controller-manager-85dd56d4cc-6h8pt\" (UID: \"acf11348-ddc8-494c-8ecc-ad1f5f44366f\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.463810 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xmz\" (UniqueName: \"kubernetes.io/projected/03a78be9-bd85-449f-93b6-1379195280c0-kube-api-access-m7xmz\") pod \"test-operator-controller-manager-7cd8bc9dbb-cckzf\" (UID: \"03a78be9-bd85-449f-93b6-1379195280c0\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.472996 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.474828 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.475810 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96xfr\" (UniqueName: \"kubernetes.io/projected/105a410d-5ae8-44ad-9e48-e7cd00ae3c27-kube-api-access-96xfr\") pod \"telemetry-operator-controller-manager-5f8f495fcf-sqqwl\" (UID: \"105a410d-5ae8-44ad-9e48-e7cd00ae3c27\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.486806 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.487587 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lldmr" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.487956 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.510265 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npkcj\" (UniqueName: \"kubernetes.io/projected/435e0e5d-88a8-4737-aa7d-cefffc292c23-kube-api-access-npkcj\") pod \"watcher-operator-controller-manager-64cd966744-x2dl5\" (UID: \"435e0e5d-88a8-4737-aa7d-cefffc292c23\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.535859 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.539704 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.612704 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.615124 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.618731 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.619872 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgfwh\" (UniqueName: \"kubernetes.io/projected/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-kube-api-access-mgfwh\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.620216 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.620411 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.626889 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npkcj\" (UniqueName: \"kubernetes.io/projected/435e0e5d-88a8-4737-aa7d-cefffc292c23-kube-api-access-npkcj\") pod \"watcher-operator-controller-manager-64cd966744-x2dl5\" (UID: \"435e0e5d-88a8-4737-aa7d-cefffc292c23\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.626969 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.628221 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.631999 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-2vrqz" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.643254 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm"] Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.715801 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.727494 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgfwh\" (UniqueName: \"kubernetes.io/projected/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-kube-api-access-mgfwh\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.734049 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.734389 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.734703 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.735282 4890 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rgrj\" (UniqueName: \"kubernetes.io/projected/cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f-kube-api-access-6rgrj\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8rlhm\" (UID: \"cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.735834 4890 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.735957 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:41.235942651 +0000 UTC m=+1063.597385060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "metrics-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.736239 4890 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.737204 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:41.237194132 +0000 UTC m=+1063.598636541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "webhook-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.736659 4890 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: E0121 15:49:40.738468 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert podName:daa2bbb5-55a8-4920-9109-45bcd643bd9f nodeName:}" failed. No retries permitted until 2026-01-21 15:49:41.738458093 +0000 UTC m=+1064.099900502 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dzvgss" (UID: "daa2bbb5-55a8-4920-9109-45bcd643bd9f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.766824 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgfwh\" (UniqueName: \"kubernetes.io/projected/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-kube-api-access-mgfwh\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.838028 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rgrj\" (UniqueName: \"kubernetes.io/projected/cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f-kube-api-access-6rgrj\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-8rlhm\" (UID: \"cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.861814 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rgrj\" (UniqueName: \"kubernetes.io/projected/cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f-kube-api-access-6rgrj\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8rlhm\" (UID: \"cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" Jan 21 15:49:40 crc kubenswrapper[4890]: I0121 15:49:40.918032 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.080203 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.108718 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.144615 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.144827 4890 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.144964 4890 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert podName:6dd75ffe-4c90-493d-b5af-313056532562 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:43.144919556 +0000 UTC m=+1065.506362015 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert") pod "infra-operator-controller-manager-77c48c7859-bbwtr" (UID: "6dd75ffe-4c90-493d-b5af-313056532562") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.245578 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.245627 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.245810 4890 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.245864 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:42.245847885 +0000 UTC m=+1064.607290294 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "webhook-server-cert" not found Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.246328 4890 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.246372 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:42.246362978 +0000 UTC m=+1064.607805387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "metrics-server-cert" not found Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.325603 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" event={"ID":"4319998f-d413-4412-bffd-7123d46bce19","Type":"ContainerStarted","Data":"039617909a911208d737169422c5a84971dcd7e3b1bf600f4820ecfac59b6c0e"} Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.331509 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" event={"ID":"0f4bb54d-23a1-4b41-995f-d7affd9cd504","Type":"ContainerStarted","Data":"ffc1dd38b70a308df6aba21d83a7d0633c854fe1bf84c993fd88afa82a554628"} Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.572109 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-c6994669c-h6r95"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.577972 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.583804 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.603001 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44"] Jan 21 15:49:41 crc kubenswrapper[4890]: W0121 15:49:41.628591 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb112e2e_1c3b_4701_87ec_dee15131d2a9.slice/crio-5758fc8326ab21193b12b353edbee92508259dfe366617a692543c6ab1e25019 WatchSource:0}: Error finding container 5758fc8326ab21193b12b353edbee92508259dfe366617a692543c6ab1e25019: Status 404 returned error can't find the container with id 5758fc8326ab21193b12b353edbee92508259dfe366617a692543c6ab1e25019 Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.649657 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.666699 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm"] Jan 21 15:49:41 crc kubenswrapper[4890]: W0121 15:49:41.674083 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8791802a_0f5d_4d66_a19b_bf2b373ddd56.slice/crio-285a8bea1f7182de9568d509f0abab39f9fd01ff4c9bf96eb809ed5ac8e73232 WatchSource:0}: Error finding container 
285a8bea1f7182de9568d509f0abab39f9fd01ff4c9bf96eb809ed5ac8e73232: Status 404 returned error can't find the container with id 285a8bea1f7182de9568d509f0abab39f9fd01ff4c9bf96eb809ed5ac8e73232 Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.674137 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.683873 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.694166 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.757271 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.757461 4890 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.757528 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert podName:daa2bbb5-55a8-4920-9109-45bcd643bd9f nodeName:}" failed. No retries permitted until 2026-01-21 15:49:43.757511122 +0000 UTC m=+1066.118953531 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dzvgss" (UID: "daa2bbb5-55a8-4920-9109-45bcd643bd9f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.775020 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.812052 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4"] Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.824462 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5"] Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.829897 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjnbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-8qwg4_openstack-operators(59304f72-3a8b-460b-989d-706b9e898d76): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.831895 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" podUID="59304f72-3a8b-460b-989d-706b9e898d76" Jan 21 15:49:41 crc 
kubenswrapper[4890]: I0121 15:49:41.836946 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf"] Jan 21 15:49:41 crc kubenswrapper[4890]: W0121 15:49:41.838746 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4950c09f_4cbd_49e4_906f_e4451c610111.slice/crio-bad01e60e27154dd20676bc4a92d6ef38108ec929cdc58646f8980d021e9b378 WatchSource:0}: Error finding container bad01e60e27154dd20676bc4a92d6ef38108ec929cdc58646f8980d021e9b378: Status 404 returned error can't find the container with id bad01e60e27154dd20676bc4a92d6ef38108ec929cdc58646f8980d021e9b378 Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.846920 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgrxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-nwlnf_openstack-operators(4950c09f-4cbd-49e4-906f-e4451c610111): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:49:41 crc kubenswrapper[4890]: E0121 15:49:41.848247 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" podUID="4950c09f-4cbd-49e4-906f-e4451c610111" Jan 21 15:49:41 crc kubenswrapper[4890]: I0121 15:49:41.988517 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl"] Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.018308 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2t8g4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-6h8pt_openstack-operators(acf11348-ddc8-494c-8ecc-ad1f5f44366f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.019837 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt"] Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.019970 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" podUID="acf11348-ddc8-494c-8ecc-ad1f5f44366f" Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.020046 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7xmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-cckzf_openstack-operators(03a78be9-bd85-449f-93b6-1379195280c0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.021110 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" podUID="03a78be9-bd85-449f-93b6-1379195280c0" Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.032326 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hfh5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-nfvzq_openstack-operators(88bf9325-183a-4b37-8278-fdf6a95edf3c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.033446 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" podUID="88bf9325-183a-4b37-8278-fdf6a95edf3c" Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.037173 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6rgrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-8rlhm_openstack-operators(cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.039290 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" podUID="cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f" Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.042619 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf"] Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.054485 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm"] Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.061825 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq"] Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.266843 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.266889 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.267031 4890 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.267077 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:44.267063806 +0000 UTC m=+1066.628506215 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "webhook-server-cert" not found Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.267401 4890 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.267428 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:44.267420574 +0000 UTC m=+1066.628862983 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "metrics-server-cert" not found Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.343825 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" event={"ID":"8791802a-0f5d-4d66-a19b-bf2b373ddd56","Type":"ContainerStarted","Data":"285a8bea1f7182de9568d509f0abab39f9fd01ff4c9bf96eb809ed5ac8e73232"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.346024 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" event={"ID":"eeba017a-ca09-444f-a4c6-895ec31b914b","Type":"ContainerStarted","Data":"81dabff2d4bc5dfa11e301f88990a1801a875e698324044b07173f7a8de2a9a6"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.347609 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" event={"ID":"88bf9325-183a-4b37-8278-fdf6a95edf3c","Type":"ContainerStarted","Data":"fdbc7764cee7f26a17a81c880fb4e2e44af0a107f51d2fb274abafd28ed9acdd"} Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.350286 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" podUID="88bf9325-183a-4b37-8278-fdf6a95edf3c" Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.350606 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" event={"ID":"acf11348-ddc8-494c-8ecc-ad1f5f44366f","Type":"ContainerStarted","Data":"184edac816e7feef3bfb3682b97e2645d183f5945656562bdb75363ef1fecc33"} Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.352848 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" podUID="acf11348-ddc8-494c-8ecc-ad1f5f44366f" Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.357560 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" event={"ID":"41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14","Type":"ContainerStarted","Data":"57068fdc3cd25e5157f9aa804ae244189140fad50d4402b1e00055887c1b25e6"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.362552 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" event={"ID":"cb112e2e-1c3b-4701-87ec-dee15131d2a9","Type":"ContainerStarted","Data":"5758fc8326ab21193b12b353edbee92508259dfe366617a692543c6ab1e25019"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.371670 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" event={"ID":"cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f","Type":"ContainerStarted","Data":"18bf97a2b3e37bb791c2ebb1a9d554a4bf91494bdaadac48a84c125a0a0ec550"} Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.376537 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" podUID="cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f" Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.405913 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" event={"ID":"2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538","Type":"ContainerStarted","Data":"acb8e82981932a5afb364c06ce3240b0dd8e6c8e62137cb999bfc5a4de03e8af"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.410879 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" event={"ID":"f176bee8-4c10-4d65-bd9c-5e95bdc707c6","Type":"ContainerStarted","Data":"81af55374239428e187a19eb6438691b48cd87d121ded36a0b0bd0ba536fff41"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.419583 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" event={"ID":"91df512f-6657-44f1-b643-c18778e5d159","Type":"ContainerStarted","Data":"f383ec99a38e85c49c552b559dbd3d7fe2aa4ebc3885d7624e30ebd1cda6536c"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.423475 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" event={"ID":"4950c09f-4cbd-49e4-906f-e4451c610111","Type":"ContainerStarted","Data":"bad01e60e27154dd20676bc4a92d6ef38108ec929cdc58646f8980d021e9b378"} Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.426482 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" podUID="4950c09f-4cbd-49e4-906f-e4451c610111" Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.427202 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" event={"ID":"d1fd4cb9-f562-48b3-b829-55f48fc8a414","Type":"ContainerStarted","Data":"352ddd8716b6877f774fc14cd667b34126b7d6e1f1d985b45497f1a933f1cd4e"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.434532 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" event={"ID":"03a78be9-bd85-449f-93b6-1379195280c0","Type":"ContainerStarted","Data":"b3ede7ad4a53804021d2307d08fa9dd27a98eaa22d09ca09ed6b38681f2a40e5"} Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.438628 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" podUID="03a78be9-bd85-449f-93b6-1379195280c0" Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.473064 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" event={"ID":"435e0e5d-88a8-4737-aa7d-cefffc292c23","Type":"ContainerStarted","Data":"8f420e25c0934e0e28a1aeb2d88cba91d73838f7481c13fc45ddfb4c5bdca813"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.476129 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" 
event={"ID":"59304f72-3a8b-460b-989d-706b9e898d76","Type":"ContainerStarted","Data":"73762765f2b0796e8fb4c91b260910c3df755d2fff7e318f7e03bad8271b917e"} Jan 21 15:49:42 crc kubenswrapper[4890]: E0121 15:49:42.477456 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" podUID="59304f72-3a8b-460b-989d-706b9e898d76" Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.481801 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" event={"ID":"44fcf69f-7131-43c4-9303-f5636c294644","Type":"ContainerStarted","Data":"eca75589fa82acee6d6b1f15b02a6181ef73e40956492317cc8576f055b8fec2"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.492533 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" event={"ID":"ab7d4301-6caa-4a1e-a634-e4a355271b68","Type":"ContainerStarted","Data":"981793d2024c9fc1dc3ffd73f324ff9413f57945deae087628a6af11a5e02ea7"} Jan 21 15:49:42 crc kubenswrapper[4890]: I0121 15:49:42.498305 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" event={"ID":"105a410d-5ae8-44ad-9e48-e7cd00ae3c27","Type":"ContainerStarted","Data":"28e5cd4aa0bfbc1a365344ba9c53b1a84d3a1cb59dbec4ee39a62148c3e8cd27"} Jan 21 15:49:43 crc kubenswrapper[4890]: I0121 15:49:43.190597 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: 
\"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 15:49:43.191116 4890 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 15:49:43.191170 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert podName:6dd75ffe-4c90-493d-b5af-313056532562 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:47.191156993 +0000 UTC m=+1069.552599402 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert") pod "infra-operator-controller-manager-77c48c7859-bbwtr" (UID: "6dd75ffe-4c90-493d-b5af-313056532562") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 15:49:43.523239 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" podUID="59304f72-3a8b-460b-989d-706b9e898d76" Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 15:49:43.523325 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" podUID="4950c09f-4cbd-49e4-906f-e4451c610111" Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 
15:49:43.523561 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" podUID="cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f" Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 15:49:43.523770 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" podUID="03a78be9-bd85-449f-93b6-1379195280c0" Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 15:49:43.523793 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" podUID="acf11348-ddc8-494c-8ecc-ad1f5f44366f" Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 15:49:43.527213 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" podUID="88bf9325-183a-4b37-8278-fdf6a95edf3c" Jan 21 15:49:43 crc kubenswrapper[4890]: I0121 15:49:43.803156 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 15:49:43.803541 4890 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:43 crc kubenswrapper[4890]: E0121 15:49:43.803777 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert podName:daa2bbb5-55a8-4920-9109-45bcd643bd9f nodeName:}" failed. No retries permitted until 2026-01-21 15:49:47.803735058 +0000 UTC m=+1070.165177467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dzvgss" (UID: "daa2bbb5-55a8-4920-9109-45bcd643bd9f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:44 crc kubenswrapper[4890]: I0121 15:49:44.315477 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:44 crc kubenswrapper[4890]: I0121 15:49:44.315557 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: 
\"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:44 crc kubenswrapper[4890]: E0121 15:49:44.315773 4890 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:49:44 crc kubenswrapper[4890]: E0121 15:49:44.315849 4890 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:49:44 crc kubenswrapper[4890]: E0121 15:49:44.315883 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:48.315857986 +0000 UTC m=+1070.677300575 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "metrics-server-cert" not found Jan 21 15:49:44 crc kubenswrapper[4890]: E0121 15:49:44.315932 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:48.315908708 +0000 UTC m=+1070.677351117 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "webhook-server-cert" not found Jan 21 15:49:47 crc kubenswrapper[4890]: I0121 15:49:47.266668 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:47 crc kubenswrapper[4890]: E0121 15:49:47.266876 4890 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:47 crc kubenswrapper[4890]: E0121 15:49:47.267140 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert podName:6dd75ffe-4c90-493d-b5af-313056532562 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:55.26711938 +0000 UTC m=+1077.628561789 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert") pod "infra-operator-controller-manager-77c48c7859-bbwtr" (UID: "6dd75ffe-4c90-493d-b5af-313056532562") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:47 crc kubenswrapper[4890]: I0121 15:49:47.875953 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:47 crc kubenswrapper[4890]: E0121 15:49:47.876125 4890 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:47 crc kubenswrapper[4890]: E0121 15:49:47.876223 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert podName:daa2bbb5-55a8-4920-9109-45bcd643bd9f nodeName:}" failed. No retries permitted until 2026-01-21 15:49:55.876200119 +0000 UTC m=+1078.237642568 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dzvgss" (UID: "daa2bbb5-55a8-4920-9109-45bcd643bd9f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:49:48 crc kubenswrapper[4890]: I0121 15:49:48.382306 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:48 crc kubenswrapper[4890]: I0121 15:49:48.382384 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:48 crc kubenswrapper[4890]: E0121 15:49:48.382607 4890 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:49:48 crc kubenswrapper[4890]: E0121 15:49:48.382659 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:56.382641177 +0000 UTC m=+1078.744083586 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "webhook-server-cert" not found Jan 21 15:49:48 crc kubenswrapper[4890]: E0121 15:49:48.382671 4890 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:49:48 crc kubenswrapper[4890]: E0121 15:49:48.382753 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:56.382730439 +0000 UTC m=+1078.744172918 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "metrics-server-cert" not found Jan 21 15:49:54 crc kubenswrapper[4890]: E0121 15:49:54.342426 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 15:49:54 crc kubenswrapper[4890]: E0121 15:49:54.343015 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2f6kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-l7ndf_openstack-operators(4319998f-d413-4412-bffd-7123d46bce19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:49:54 crc kubenswrapper[4890]: E0121 15:49:54.344543 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" podUID="4319998f-d413-4412-bffd-7123d46bce19" Jan 21 15:49:54 crc kubenswrapper[4890]: E0121 15:49:54.601666 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" podUID="4319998f-d413-4412-bffd-7123d46bce19" Jan 21 15:49:55 crc kubenswrapper[4890]: E0121 15:49:55.081092 4890 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 21 15:49:55 crc kubenswrapper[4890]: E0121 15:49:55.081708 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7jt7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-v7zt4_openstack-operators(8791802a-0f5d-4d66-a19b-bf2b373ddd56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:49:55 crc kubenswrapper[4890]: E0121 15:49:55.082986 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" podUID="8791802a-0f5d-4d66-a19b-bf2b373ddd56" Jan 21 15:49:55 crc kubenswrapper[4890]: I0121 15:49:55.295050 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:49:55 crc kubenswrapper[4890]: E0121 15:49:55.295264 4890 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 21 15:49:55 crc kubenswrapper[4890]: E0121 15:49:55.295377 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert podName:6dd75ffe-4c90-493d-b5af-313056532562 nodeName:}" failed. No retries permitted until 2026-01-21 15:50:11.295337463 +0000 UTC m=+1093.656779912 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert") pod "infra-operator-controller-manager-77c48c7859-bbwtr" (UID: "6dd75ffe-4c90-493d-b5af-313056532562") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:49:55 crc kubenswrapper[4890]: E0121 15:49:55.625253 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" podUID="8791802a-0f5d-4d66-a19b-bf2b373ddd56" Jan 21 15:49:55 crc kubenswrapper[4890]: I0121 15:49:55.904863 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:55 crc kubenswrapper[4890]: I0121 15:49:55.914763 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/daa2bbb5-55a8-4920-9109-45bcd643bd9f-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dzvgss\" (UID: \"daa2bbb5-55a8-4920-9109-45bcd643bd9f\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:56 crc kubenswrapper[4890]: I0121 15:49:56.115786 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:49:56 crc kubenswrapper[4890]: I0121 15:49:56.412955 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:56 crc kubenswrapper[4890]: I0121 15:49:56.413134 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:49:56 crc kubenswrapper[4890]: E0121 15:49:56.413160 4890 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:49:56 crc kubenswrapper[4890]: E0121 15:49:56.413247 4890 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:49:56 crc kubenswrapper[4890]: E0121 15:49:56.413254 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:50:12.413233028 +0000 UTC m=+1094.774675437 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "webhook-server-cert" not found Jan 21 15:49:56 crc kubenswrapper[4890]: E0121 15:49:56.413308 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs podName:41b6e8d7-5b8e-4953-bb8c-af061e0fda60 nodeName:}" failed. No retries permitted until 2026-01-21 15:50:12.41329675 +0000 UTC m=+1094.774739159 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-tqb8d" (UID: "41b6e8d7-5b8e-4953-bb8c-af061e0fda60") : secret "metrics-server-cert" not found Jan 21 15:50:05 crc kubenswrapper[4890]: E0121 15:50:03.897124 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028" Jan 21 15:50:05 crc kubenswrapper[4890]: E0121 15:50:03.897936 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h659r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-c6994669c-h6r95_openstack-operators(44fcf69f-7131-43c4-9303-f5636c294644): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:50:05 crc kubenswrapper[4890]: E0121 15:50:03.899260 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" podUID="44fcf69f-7131-43c4-9303-f5636c294644" Jan 21 15:50:05 crc kubenswrapper[4890]: E0121 15:50:04.687224 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028\\\"\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" podUID="44fcf69f-7131-43c4-9303-f5636c294644" Jan 21 15:50:08 crc kubenswrapper[4890]: E0121 15:50:08.277708 4890 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 15:50:08 crc kubenswrapper[4890]: E0121 15:50:08.278245 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96xfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-sqqwl_openstack-operators(105a410d-5ae8-44ad-9e48-e7cd00ae3c27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:50:08 crc kubenswrapper[4890]: E0121 15:50:08.279438 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" podUID="105a410d-5ae8-44ad-9e48-e7cd00ae3c27" Jan 21 15:50:08 crc kubenswrapper[4890]: E0121 15:50:08.713905 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" podUID="105a410d-5ae8-44ad-9e48-e7cd00ae3c27" Jan 21 15:50:09 crc kubenswrapper[4890]: E0121 15:50:09.906710 4890 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 21 15:50:09 crc kubenswrapper[4890]: E0121 15:50:09.906937 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mlsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-f92ld_openstack-operators(2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:50:09 crc kubenswrapper[4890]: E0121 15:50:09.909576 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" podUID="2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538" Jan 21 15:50:10 crc kubenswrapper[4890]: E0121 15:50:10.026255 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 21 15:50:10 crc kubenswrapper[4890]: E0121 15:50:10.026590 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8f4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-pqzjj_openstack-operators(91df512f-6657-44f1-b643-c18778e5d159): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:50:10 crc kubenswrapper[4890]: E0121 15:50:10.028029 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" podUID="91df512f-6657-44f1-b643-c18778e5d159" Jan 21 15:50:10 crc kubenswrapper[4890]: E0121 15:50:10.730785 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" podUID="2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538" Jan 21 15:50:10 crc kubenswrapper[4890]: E0121 15:50:10.732396 4890 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" podUID="91df512f-6657-44f1-b643-c18778e5d159" Jan 21 15:50:10 crc kubenswrapper[4890]: E0121 15:50:10.886455 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 21 15:50:10 crc kubenswrapper[4890]: E0121 15:50:10.886788 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2h2ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-jcfgv_openstack-operators(ab7d4301-6caa-4a1e-a634-e4a355271b68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:50:10 crc kubenswrapper[4890]: E0121 15:50:10.887901 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" podUID="ab7d4301-6caa-4a1e-a634-e4a355271b68" Jan 21 15:50:11 crc kubenswrapper[4890]: I0121 15:50:11.391869 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert\") pod 
\"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:50:11 crc kubenswrapper[4890]: I0121 15:50:11.401775 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dd75ffe-4c90-493d-b5af-313056532562-cert\") pod \"infra-operator-controller-manager-77c48c7859-bbwtr\" (UID: \"6dd75ffe-4c90-493d-b5af-313056532562\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:50:11 crc kubenswrapper[4890]: I0121 15:50:11.470287 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gv92d" Jan 21 15:50:11 crc kubenswrapper[4890]: I0121 15:50:11.477739 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:50:11 crc kubenswrapper[4890]: E0121 15:50:11.735385 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" podUID="ab7d4301-6caa-4a1e-a634-e4a355271b68" Jan 21 15:50:12 crc kubenswrapper[4890]: E0121 15:50:12.050597 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 15:50:12 crc kubenswrapper[4890]: E0121 15:50:12.050943 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2r25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-rklqr_openstack-operators(d1fd4cb9-f562-48b3-b829-55f48fc8a414): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:50:12 crc kubenswrapper[4890]: E0121 15:50:12.052334 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" podUID="d1fd4cb9-f562-48b3-b829-55f48fc8a414" Jan 21 15:50:12 crc kubenswrapper[4890]: I0121 15:50:12.509725 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:50:12 crc kubenswrapper[4890]: I0121 15:50:12.509802 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:50:12 crc kubenswrapper[4890]: I0121 15:50:12.515252 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:50:12 crc kubenswrapper[4890]: I0121 15:50:12.515281 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41b6e8d7-5b8e-4953-bb8c-af061e0fda60-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-tqb8d\" (UID: \"41b6e8d7-5b8e-4953-bb8c-af061e0fda60\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:50:12 crc kubenswrapper[4890]: I0121 15:50:12.565847 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lldmr" Jan 21 15:50:12 crc kubenswrapper[4890]: I0121 15:50:12.574461 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:50:12 crc kubenswrapper[4890]: E0121 15:50:12.956154 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 15:50:12 crc kubenswrapper[4890]: E0121 15:50:12.956781 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v852p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-4lhwm_openstack-operators(eeba017a-ca09-444f-a4c6-895ec31b914b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:50:12 crc kubenswrapper[4890]: E0121 15:50:12.957991 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" podUID="eeba017a-ca09-444f-a4c6-895ec31b914b" Jan 21 15:50:13 crc kubenswrapper[4890]: E0121 15:50:13.017636 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" podUID="d1fd4cb9-f562-48b3-b829-55f48fc8a414" Jan 21 15:50:13 crc kubenswrapper[4890]: E0121 15:50:13.752332 4890 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" podUID="eeba017a-ca09-444f-a4c6-895ec31b914b" Jan 21 15:50:19 crc kubenswrapper[4890]: I0121 15:50:19.391924 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr"] Jan 21 15:50:19 crc kubenswrapper[4890]: W0121 15:50:19.417012 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dd75ffe_4c90_493d_b5af_313056532562.slice/crio-e427cbde9b1e112d1d242f5df1c37552bd41c2fbfc2392b3f6e7a61f77870628 WatchSource:0}: Error finding container e427cbde9b1e112d1d242f5df1c37552bd41c2fbfc2392b3f6e7a61f77870628: Status 404 returned error can't find the container with id e427cbde9b1e112d1d242f5df1c37552bd41c2fbfc2392b3f6e7a61f77870628 Jan 21 15:50:19 crc kubenswrapper[4890]: I0121 15:50:19.422455 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:50:19 crc kubenswrapper[4890]: W0121 15:50:19.443826 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaa2bbb5_55a8_4920_9109_45bcd643bd9f.slice/crio-10f069d65fc2ab0a46de3dd6e2ebe626d1319ddc55c1da1919083e48da1decf8 WatchSource:0}: Error finding container 10f069d65fc2ab0a46de3dd6e2ebe626d1319ddc55c1da1919083e48da1decf8: Status 404 returned error can't find the container with id 10f069d65fc2ab0a46de3dd6e2ebe626d1319ddc55c1da1919083e48da1decf8 Jan 21 15:50:19 crc kubenswrapper[4890]: I0121 15:50:19.440793 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss"] Jan 
21 15:50:19 crc kubenswrapper[4890]: I0121 15:50:19.493398 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d"] Jan 21 15:50:19 crc kubenswrapper[4890]: W0121 15:50:19.500805 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41b6e8d7_5b8e_4953_bb8c_af061e0fda60.slice/crio-1e1367dd206d051b4d443df9691e0a753459685d5e888d9889c752c56ca0a967 WatchSource:0}: Error finding container 1e1367dd206d051b4d443df9691e0a753459685d5e888d9889c752c56ca0a967: Status 404 returned error can't find the container with id 1e1367dd206d051b4d443df9691e0a753459685d5e888d9889c752c56ca0a967 Jan 21 15:50:19 crc kubenswrapper[4890]: I0121 15:50:19.811074 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" event={"ID":"daa2bbb5-55a8-4920-9109-45bcd643bd9f","Type":"ContainerStarted","Data":"10f069d65fc2ab0a46de3dd6e2ebe626d1319ddc55c1da1919083e48da1decf8"} Jan 21 15:50:19 crc kubenswrapper[4890]: I0121 15:50:19.812646 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" event={"ID":"41b6e8d7-5b8e-4953-bb8c-af061e0fda60","Type":"ContainerStarted","Data":"1e1367dd206d051b4d443df9691e0a753459685d5e888d9889c752c56ca0a967"} Jan 21 15:50:19 crc kubenswrapper[4890]: I0121 15:50:19.814108 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" event={"ID":"6dd75ffe-4c90-493d-b5af-313056532562","Type":"ContainerStarted","Data":"e427cbde9b1e112d1d242f5df1c37552bd41c2fbfc2392b3f6e7a61f77870628"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.828049 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" 
event={"ID":"4950c09f-4cbd-49e4-906f-e4451c610111","Type":"ContainerStarted","Data":"1f9d1f2cf5be5e66c4a970a776260cea8c6676e1462f28e80b642b9d716c836e"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.828421 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.838335 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" event={"ID":"cb112e2e-1c3b-4701-87ec-dee15131d2a9","Type":"ContainerStarted","Data":"409d23575f5a51df91fe62426a8030401b7c3120ee3551229b8787a99c3f3676"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.838477 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.842683 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" event={"ID":"cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f","Type":"ContainerStarted","Data":"6ccb2061438748e713cf2b6b2ea25acde0c1a579e22627ae256262bc8ec68b7c"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.848947 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" event={"ID":"59304f72-3a8b-460b-989d-706b9e898d76","Type":"ContainerStarted","Data":"2871616b523d175240a9b8be26309f86c52e4f91e59e4bbe5de4843daec3c67f"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.849280 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.856386 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" event={"ID":"0f4bb54d-23a1-4b41-995f-d7affd9cd504","Type":"ContainerStarted","Data":"0bad00904d0b565c56ded2e50f990e576519187b6ff09f39de60f4ed1a0963cb"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.856493 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.879383 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" event={"ID":"88bf9325-183a-4b37-8278-fdf6a95edf3c","Type":"ContainerStarted","Data":"6a757c7e452ad0589b4288809d66790a3c20b83c3566004d3e0b53759a1feddd"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.879770 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.896819 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" event={"ID":"41b6e8d7-5b8e-4953-bb8c-af061e0fda60","Type":"ContainerStarted","Data":"3926b7d10881ad7eea9e610e83f8dec636cf55555e4ddaeb7119962c6e1588d0"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.896965 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.907933 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" event={"ID":"f176bee8-4c10-4d65-bd9c-5e95bdc707c6","Type":"ContainerStarted","Data":"997b2105d8be37fe9a00390f98bd47e6325d920c1dc14836f52f60f0fa22d204"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.908178 4890 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.925279 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" event={"ID":"acf11348-ddc8-494c-8ecc-ad1f5f44366f","Type":"ContainerStarted","Data":"e02a6e56852c324b312bc6c71ce7550e34dd569f7c3f71ce9cef8bdb7292ee3e"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.925517 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.937583 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" event={"ID":"41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14","Type":"ContainerStarted","Data":"f7b25e6be3c80bd88afba8d5a048566846460dc2b0cec55c7d84c4ad0a9ed4ea"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.937730 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.952297 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" event={"ID":"03a78be9-bd85-449f-93b6-1379195280c0","Type":"ContainerStarted","Data":"9a287895702ea1c7edaf71605b846e722509fec978d8a5953b63ac9baec9ea04"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.953558 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.961890 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" 
event={"ID":"435e0e5d-88a8-4737-aa7d-cefffc292c23","Type":"ContainerStarted","Data":"543ce52f9a66539550334f0ddfa5207293f53a2e90449d1332c66e203ecc1c6f"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.962930 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.968815 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" event={"ID":"44fcf69f-7131-43c4-9303-f5636c294644","Type":"ContainerStarted","Data":"864074fda372223c60155ad2dfe9eb89df5eb0f2de310a724bf5223c4a168d3b"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.969475 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.970437 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" podStartSLOduration=4.931048668 podStartE2EDuration="41.970406777s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.846747782 +0000 UTC m=+1064.208190201" lastFinishedPulling="2026-01-21 15:50:18.886105901 +0000 UTC m=+1101.247548310" observedRunningTime="2026-01-21 15:50:20.966239095 +0000 UTC m=+1103.327681504" watchObservedRunningTime="2026-01-21 15:50:20.970406777 +0000 UTC m=+1103.331849186" Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.995741 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" event={"ID":"4319998f-d413-4412-bffd-7123d46bce19","Type":"ContainerStarted","Data":"2013c49a2ccba47d74c30efaf4cf04631327e90b6c0cf3b86d1f23a3c85d237f"} Jan 21 15:50:20 crc kubenswrapper[4890]: I0121 15:50:20.996866 4890 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.011002 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" event={"ID":"8791802a-0f5d-4d66-a19b-bf2b373ddd56","Type":"ContainerStarted","Data":"4f2dadfd415bd4f458a8190bb739e449d79428e8b4e4fe7a5f8160fe74c84a88"} Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.012031 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.068918 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" podStartSLOduration=5.215688348 podStartE2EDuration="42.068884536s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:42.032127974 +0000 UTC m=+1064.393570383" lastFinishedPulling="2026-01-21 15:50:18.885324172 +0000 UTC m=+1101.246766571" observedRunningTime="2026-01-21 15:50:21.065330148 +0000 UTC m=+1103.426772557" watchObservedRunningTime="2026-01-21 15:50:21.068884536 +0000 UTC m=+1103.430326945" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.170216 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8rlhm" podStartSLOduration=4.122283974 podStartE2EDuration="41.170190474s" podCreationTimestamp="2026-01-21 15:49:40 +0000 UTC" firstStartedPulling="2026-01-21 15:49:42.036988353 +0000 UTC m=+1064.398430762" lastFinishedPulling="2026-01-21 15:50:19.084894853 +0000 UTC m=+1101.446337262" observedRunningTime="2026-01-21 15:50:21.166416401 +0000 UTC m=+1103.527858810" watchObservedRunningTime="2026-01-21 15:50:21.170190474 +0000 UTC 
m=+1103.531632883" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.245740 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" podStartSLOduration=10.510265653 podStartE2EDuration="42.245715466s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:40.966521887 +0000 UTC m=+1063.327964296" lastFinishedPulling="2026-01-21 15:50:12.70197169 +0000 UTC m=+1095.063414109" observedRunningTime="2026-01-21 15:50:21.243675256 +0000 UTC m=+1103.605117685" watchObservedRunningTime="2026-01-21 15:50:21.245715466 +0000 UTC m=+1103.607157875" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.303527 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" podStartSLOduration=11.301179985 podStartE2EDuration="42.303500591s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.699621154 +0000 UTC m=+1064.061063573" lastFinishedPulling="2026-01-21 15:50:12.70194177 +0000 UTC m=+1095.063384179" observedRunningTime="2026-01-21 15:50:21.30223666 +0000 UTC m=+1103.663679069" watchObservedRunningTime="2026-01-21 15:50:21.303500591 +0000 UTC m=+1103.664943000" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.337891 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" podStartSLOduration=5.469891485 podStartE2EDuration="42.337859788s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:42.016311783 +0000 UTC m=+1064.377754192" lastFinishedPulling="2026-01-21 15:50:18.884280096 +0000 UTC m=+1101.245722495" observedRunningTime="2026-01-21 15:50:21.333379748 +0000 UTC m=+1103.694822167" watchObservedRunningTime="2026-01-21 15:50:21.337859788 +0000 UTC m=+1103.699302197" Jan 21 
15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.357727 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" podStartSLOduration=5.303110984 podStartE2EDuration="42.357700458s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.829726443 +0000 UTC m=+1064.191168852" lastFinishedPulling="2026-01-21 15:50:18.884315927 +0000 UTC m=+1101.245758326" observedRunningTime="2026-01-21 15:50:21.355945414 +0000 UTC m=+1103.717387833" watchObservedRunningTime="2026-01-21 15:50:21.357700458 +0000 UTC m=+1103.719142867" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.426308 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" podStartSLOduration=41.426279669 podStartE2EDuration="41.426279669s" podCreationTimestamp="2026-01-21 15:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:21.420479836 +0000 UTC m=+1103.781922245" watchObservedRunningTime="2026-01-21 15:50:21.426279669 +0000 UTC m=+1103.787722098" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.455599 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" podStartSLOduration=8.761740716 podStartE2EDuration="41.455568011s" podCreationTimestamp="2026-01-21 15:49:40 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.829668961 +0000 UTC m=+1064.191111370" lastFinishedPulling="2026-01-21 15:50:14.523496246 +0000 UTC m=+1096.884938665" observedRunningTime="2026-01-21 15:50:21.447324727 +0000 UTC m=+1103.808767136" watchObservedRunningTime="2026-01-21 15:50:21.455568011 +0000 UTC m=+1103.817010420" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.515726 4890 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" podStartSLOduration=9.68895009 podStartE2EDuration="42.515699933s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.695849241 +0000 UTC m=+1064.057291650" lastFinishedPulling="2026-01-21 15:50:14.522599084 +0000 UTC m=+1096.884041493" observedRunningTime="2026-01-21 15:50:21.512020832 +0000 UTC m=+1103.873463241" watchObservedRunningTime="2026-01-21 15:50:21.515699933 +0000 UTC m=+1103.877142352" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.567712 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" podStartSLOduration=8.912534415 podStartE2EDuration="42.567667624s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.633695399 +0000 UTC m=+1063.995137808" lastFinishedPulling="2026-01-21 15:50:15.288828608 +0000 UTC m=+1097.650271017" observedRunningTime="2026-01-21 15:50:21.556588801 +0000 UTC m=+1103.918031220" watchObservedRunningTime="2026-01-21 15:50:21.567667624 +0000 UTC m=+1103.929110033" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.586721 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" podStartSLOduration=4.735066734 podStartE2EDuration="41.586698753s" podCreationTimestamp="2026-01-21 15:49:40 +0000 UTC" firstStartedPulling="2026-01-21 15:49:42.019887962 +0000 UTC m=+1064.381330371" lastFinishedPulling="2026-01-21 15:50:18.871519981 +0000 UTC m=+1101.232962390" observedRunningTime="2026-01-21 15:50:21.58576029 +0000 UTC m=+1103.947202699" watchObservedRunningTime="2026-01-21 15:50:21.586698753 +0000 UTC m=+1103.948141162" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.624729 4890 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" podStartSLOduration=4.927086009 podStartE2EDuration="42.62470017s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.173840559 +0000 UTC m=+1063.535282968" lastFinishedPulling="2026-01-21 15:50:18.87145472 +0000 UTC m=+1101.232897129" observedRunningTime="2026-01-21 15:50:21.623903951 +0000 UTC m=+1103.985346360" watchObservedRunningTime="2026-01-21 15:50:21.62470017 +0000 UTC m=+1103.986142589" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.657244 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" podStartSLOduration=5.205160866 podStartE2EDuration="42.657224372s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.581157883 +0000 UTC m=+1063.942600292" lastFinishedPulling="2026-01-21 15:50:19.033221389 +0000 UTC m=+1101.394663798" observedRunningTime="2026-01-21 15:50:21.654197648 +0000 UTC m=+1104.015640057" watchObservedRunningTime="2026-01-21 15:50:21.657224372 +0000 UTC m=+1104.018666771" Jan 21 15:50:21 crc kubenswrapper[4890]: I0121 15:50:21.681520 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" podStartSLOduration=5.488650387 podStartE2EDuration="42.681500671s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.678557825 +0000 UTC m=+1064.040000234" lastFinishedPulling="2026-01-21 15:50:18.871408109 +0000 UTC m=+1101.232850518" observedRunningTime="2026-01-21 15:50:21.679126342 +0000 UTC m=+1104.040568751" watchObservedRunningTime="2026-01-21 15:50:21.681500671 +0000 UTC m=+1104.042943070" Jan 21 15:50:22 crc kubenswrapper[4890]: I0121 15:50:22.050221 4890 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" event={"ID":"105a410d-5ae8-44ad-9e48-e7cd00ae3c27","Type":"ContainerStarted","Data":"89fd40f89a4acf507dc4ce6a25c687ff5975689acd1da9205b2e34c647e2265f"} Jan 21 15:50:22 crc kubenswrapper[4890]: I0121 15:50:22.051117 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" Jan 21 15:50:22 crc kubenswrapper[4890]: I0121 15:50:22.102478 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" podStartSLOduration=2.387240581 podStartE2EDuration="42.102446301s" podCreationTimestamp="2026-01-21 15:49:40 +0000 UTC" firstStartedPulling="2026-01-21 15:49:42.005892507 +0000 UTC m=+1064.367334916" lastFinishedPulling="2026-01-21 15:50:21.721098227 +0000 UTC m=+1104.082540636" observedRunningTime="2026-01-21 15:50:22.085541924 +0000 UTC m=+1104.446984343" watchObservedRunningTime="2026-01-21 15:50:22.102446301 +0000 UTC m=+1104.463888710" Jan 21 15:50:23 crc kubenswrapper[4890]: I0121 15:50:23.064170 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" event={"ID":"91df512f-6657-44f1-b643-c18778e5d159","Type":"ContainerStarted","Data":"4540414c1abdf06c1556870c1104e8fa1ed52198b0c52c015620a9d5bbe124b3"} Jan 21 15:50:23 crc kubenswrapper[4890]: I0121 15:50:23.065401 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" Jan 21 15:50:23 crc kubenswrapper[4890]: I0121 15:50:23.067428 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" 
event={"ID":"2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538","Type":"ContainerStarted","Data":"330c165af05472a7ca64ed7795c216889481aab59c3e41299fba422edf88dec5"} Jan 21 15:50:23 crc kubenswrapper[4890]: I0121 15:50:23.067744 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" Jan 21 15:50:23 crc kubenswrapper[4890]: I0121 15:50:23.094793 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" podStartSLOduration=3.140173188 podStartE2EDuration="44.09475194s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.695511743 +0000 UTC m=+1064.056954152" lastFinishedPulling="2026-01-21 15:50:22.650090495 +0000 UTC m=+1105.011532904" observedRunningTime="2026-01-21 15:50:23.080339915 +0000 UTC m=+1105.441782324" watchObservedRunningTime="2026-01-21 15:50:23.09475194 +0000 UTC m=+1105.456194349" Jan 21 15:50:23 crc kubenswrapper[4890]: I0121 15:50:23.116670 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" podStartSLOduration=3.755605003 podStartE2EDuration="44.11664206s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.579587044 +0000 UTC m=+1063.941029453" lastFinishedPulling="2026-01-21 15:50:21.940624111 +0000 UTC m=+1104.302066510" observedRunningTime="2026-01-21 15:50:23.102789408 +0000 UTC m=+1105.464231817" watchObservedRunningTime="2026-01-21 15:50:23.11664206 +0000 UTC m=+1105.478084469" Jan 21 15:50:27 crc kubenswrapper[4890]: I0121 15:50:27.108150 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" 
event={"ID":"6dd75ffe-4c90-493d-b5af-313056532562","Type":"ContainerStarted","Data":"3404c50e11bb6cfbc8fddb9f2d984e6f1ac743a6b485947cdd3893c4a5db6e90"} Jan 21 15:50:27 crc kubenswrapper[4890]: I0121 15:50:27.108807 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:50:27 crc kubenswrapper[4890]: I0121 15:50:27.111231 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" event={"ID":"daa2bbb5-55a8-4920-9109-45bcd643bd9f","Type":"ContainerStarted","Data":"ba0ae5d74569a3a669effedc3a67d19602468091d932bfab5b6e6220a103cd6b"} Jan 21 15:50:27 crc kubenswrapper[4890]: I0121 15:50:27.111422 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:50:27 crc kubenswrapper[4890]: I0121 15:50:27.136236 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" podStartSLOduration=41.652781735 podStartE2EDuration="48.136210436s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:50:19.422007646 +0000 UTC m=+1101.783450045" lastFinishedPulling="2026-01-21 15:50:25.905436327 +0000 UTC m=+1108.266878746" observedRunningTime="2026-01-21 15:50:27.128059065 +0000 UTC m=+1109.489501474" watchObservedRunningTime="2026-01-21 15:50:27.136210436 +0000 UTC m=+1109.497652845" Jan 21 15:50:27 crc kubenswrapper[4890]: I0121 15:50:27.170237 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" podStartSLOduration=41.714486667 podStartE2EDuration="48.170217834s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 
15:50:19.446611473 +0000 UTC m=+1101.808053882" lastFinishedPulling="2026-01-21 15:50:25.90234262 +0000 UTC m=+1108.263785049" observedRunningTime="2026-01-21 15:50:27.162153596 +0000 UTC m=+1109.523596005" watchObservedRunningTime="2026-01-21 15:50:27.170217834 +0000 UTC m=+1109.531660243" Jan 21 15:50:29 crc kubenswrapper[4890]: I0121 15:50:29.543991 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-5c7zf" Jan 21 15:50:29 crc kubenswrapper[4890]: I0121 15:50:29.567968 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f92ld" Jan 21 15:50:29 crc kubenswrapper[4890]: I0121 15:50:29.571597 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-l7ndf" Jan 21 15:50:29 crc kubenswrapper[4890]: I0121 15:50:29.607442 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h6r95" Jan 21 15:50:29 crc kubenswrapper[4890]: I0121 15:50:29.620671 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v7zt4" Jan 21 15:50:29 crc kubenswrapper[4890]: I0121 15:50:29.643465 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pqzjj" Jan 21 15:50:29 crc kubenswrapper[4890]: I0121 15:50:29.793254 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-xvfb4" Jan 21 15:50:29 crc kubenswrapper[4890]: I0121 15:50:29.847506 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-nfvzq" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.044944 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-wmm44" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.130163 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" event={"ID":"eeba017a-ca09-444f-a4c6-895ec31b914b","Type":"ContainerStarted","Data":"055c2c880b0c2ce7bec8a842b7fae18e0b0a2edab4f748bf330b9335047b9296"} Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.130400 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.131798 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" event={"ID":"d1fd4cb9-f562-48b3-b829-55f48fc8a414","Type":"ContainerStarted","Data":"e0e45eb81a28b710f9c4cbeefa9e02edaad66dccd4d4501c5b7b2d14ad9e5f62"} Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.132040 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.133572 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" event={"ID":"ab7d4301-6caa-4a1e-a634-e4a355271b68","Type":"ContainerStarted","Data":"81bd935212ab8a25676322f49484ef058d3bfed46c764bd9b78510a100f4066b"} Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.133740 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" Jan 21 15:50:30 crc 
kubenswrapper[4890]: I0121 15:50:30.153808 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" podStartSLOduration=5.716980567 podStartE2EDuration="51.153790254s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.67633508 +0000 UTC m=+1064.037777489" lastFinishedPulling="2026-01-21 15:50:27.113144767 +0000 UTC m=+1109.474587176" observedRunningTime="2026-01-21 15:50:30.14914985 +0000 UTC m=+1112.510592259" watchObservedRunningTime="2026-01-21 15:50:30.153790254 +0000 UTC m=+1112.515232663" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.169302 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" podStartSLOduration=2.971240482 podStartE2EDuration="51.169280926s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.606380095 +0000 UTC m=+1063.967822504" lastFinishedPulling="2026-01-21 15:50:29.804420539 +0000 UTC m=+1112.165862948" observedRunningTime="2026-01-21 15:50:30.163550305 +0000 UTC m=+1112.524992714" watchObservedRunningTime="2026-01-21 15:50:30.169280926 +0000 UTC m=+1112.530723335" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.184833 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" podStartSLOduration=5.889142722 podStartE2EDuration="51.184816659s" podCreationTimestamp="2026-01-21 15:49:39 +0000 UTC" firstStartedPulling="2026-01-21 15:49:41.819380147 +0000 UTC m=+1064.180822556" lastFinishedPulling="2026-01-21 15:50:27.115054094 +0000 UTC m=+1109.476496493" observedRunningTime="2026-01-21 15:50:30.179754604 +0000 UTC m=+1112.541197013" watchObservedRunningTime="2026-01-21 15:50:30.184816659 +0000 UTC m=+1112.546259068" Jan 21 15:50:30 crc kubenswrapper[4890]: 
I0121 15:50:30.275991 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nwlnf" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.374340 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9nwtw" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.539734 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-sqqwl" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.617738 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cckzf" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.618889 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-6h8pt" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.622298 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8qwg4" Jan 21 15:50:30 crc kubenswrapper[4890]: I0121 15:50:30.724195 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-x2dl5" Jan 21 15:50:31 crc kubenswrapper[4890]: I0121 15:50:31.483444 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bbwtr" Jan 21 15:50:32 crc kubenswrapper[4890]: I0121 15:50:32.582280 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-tqb8d" Jan 21 15:50:36 crc kubenswrapper[4890]: I0121 15:50:36.120937 4890 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dzvgss" Jan 21 15:50:39 crc kubenswrapper[4890]: I0121 15:50:39.790047 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-rklqr" Jan 21 15:50:39 crc kubenswrapper[4890]: I0121 15:50:39.960836 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-jcfgv" Jan 21 15:50:40 crc kubenswrapper[4890]: I0121 15:50:40.126814 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-4lhwm" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.798821 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-7tgw4"] Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.803028 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.805374 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.805663 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qzqvl" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.806521 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.806713 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.812837 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-7tgw4"] Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.935016 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-m5gcl"] Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.943225 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.947705 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.960479 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71b1307a-b350-465b-a8f6-2087f639cdaa-config\") pod \"dnsmasq-dns-84bb9d8bd9-7tgw4\" (UID: \"71b1307a-b350-465b-a8f6-2087f639cdaa\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.960537 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkdnx\" (UniqueName: \"kubernetes.io/projected/71b1307a-b350-465b-a8f6-2087f639cdaa-kube-api-access-gkdnx\") pod \"dnsmasq-dns-84bb9d8bd9-7tgw4\" (UID: \"71b1307a-b350-465b-a8f6-2087f639cdaa\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:50:57 crc kubenswrapper[4890]: I0121 15:50:57.968319 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-m5gcl"] Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.061829 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71b1307a-b350-465b-a8f6-2087f639cdaa-config\") pod \"dnsmasq-dns-84bb9d8bd9-7tgw4\" (UID: \"71b1307a-b350-465b-a8f6-2087f639cdaa\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.062078 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkdnx\" (UniqueName: \"kubernetes.io/projected/71b1307a-b350-465b-a8f6-2087f639cdaa-kube-api-access-gkdnx\") pod \"dnsmasq-dns-84bb9d8bd9-7tgw4\" (UID: \"71b1307a-b350-465b-a8f6-2087f639cdaa\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:50:58 crc 
kubenswrapper[4890]: I0121 15:50:58.062176 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-dns-svc\") pod \"dnsmasq-dns-5f854695bc-m5gcl\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.062271 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-config\") pod \"dnsmasq-dns-5f854695bc-m5gcl\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.062451 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbkl\" (UniqueName: \"kubernetes.io/projected/ab879f4b-e729-425b-9334-7edd77094726-kube-api-access-9vbkl\") pod \"dnsmasq-dns-5f854695bc-m5gcl\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.065101 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.074163 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71b1307a-b350-465b-a8f6-2087f639cdaa-config\") pod \"dnsmasq-dns-84bb9d8bd9-7tgw4\" (UID: \"71b1307a-b350-465b-a8f6-2087f639cdaa\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.089563 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.102985 4890 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"openshift-service-ca.crt" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.128389 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkdnx\" (UniqueName: \"kubernetes.io/projected/71b1307a-b350-465b-a8f6-2087f639cdaa-kube-api-access-gkdnx\") pod \"dnsmasq-dns-84bb9d8bd9-7tgw4\" (UID: \"71b1307a-b350-465b-a8f6-2087f639cdaa\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.164931 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-dns-svc\") pod \"dnsmasq-dns-5f854695bc-m5gcl\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.165235 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-config\") pod \"dnsmasq-dns-5f854695bc-m5gcl\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.165375 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vbkl\" (UniqueName: \"kubernetes.io/projected/ab879f4b-e729-425b-9334-7edd77094726-kube-api-access-9vbkl\") pod \"dnsmasq-dns-5f854695bc-m5gcl\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.166055 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-dns-svc\") pod \"dnsmasq-dns-5f854695bc-m5gcl\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: 
I0121 15:50:58.166226 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-config\") pod \"dnsmasq-dns-5f854695bc-m5gcl\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.193066 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbkl\" (UniqueName: \"kubernetes.io/projected/ab879f4b-e729-425b-9334-7edd77094726-kube-api-access-9vbkl\") pod \"dnsmasq-dns-5f854695bc-m5gcl\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.280755 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qzqvl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.288770 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.421737 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.690403 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-7tgw4"] Jan 21 15:50:58 crc kubenswrapper[4890]: I0121 15:50:58.773034 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-m5gcl"] Jan 21 15:50:58 crc kubenswrapper[4890]: W0121 15:50:58.780144 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab879f4b_e729_425b_9334_7edd77094726.slice/crio-686eb2047ce2db8b7f239d1cbb35d0c965178f11de178df1b052bb7652f7fd81 WatchSource:0}: Error finding container 686eb2047ce2db8b7f239d1cbb35d0c965178f11de178df1b052bb7652f7fd81: Status 404 returned error can't find the container with id 686eb2047ce2db8b7f239d1cbb35d0c965178f11de178df1b052bb7652f7fd81 Jan 21 15:50:59 crc kubenswrapper[4890]: I0121 15:50:59.350855 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" event={"ID":"71b1307a-b350-465b-a8f6-2087f639cdaa","Type":"ContainerStarted","Data":"e5ece9c611ae8621e75281d862e2a46d777feb40c18ac6bc5b501937581cc0a1"} Jan 21 15:50:59 crc kubenswrapper[4890]: I0121 15:50:59.352110 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" event={"ID":"ab879f4b-e729-425b-9334-7edd77094726","Type":"ContainerStarted","Data":"686eb2047ce2db8b7f239d1cbb35d0c965178f11de178df1b052bb7652f7fd81"} Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.418243 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-m5gcl"] Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.450071 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-crcn7"] Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.451207 4890 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.475827 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-crcn7"] Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.600068 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-crcn7\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.600144 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlvm7\" (UniqueName: \"kubernetes.io/projected/48e6547a-641f-46ba-9d0b-977d59a8f401-kube-api-access-rlvm7\") pod \"dnsmasq-dns-744ffd65bc-crcn7\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.600178 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-config\") pod \"dnsmasq-dns-744ffd65bc-crcn7\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.702215 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-crcn7\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.702302 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlvm7\" 
(UniqueName: \"kubernetes.io/projected/48e6547a-641f-46ba-9d0b-977d59a8f401-kube-api-access-rlvm7\") pod \"dnsmasq-dns-744ffd65bc-crcn7\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.702341 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-config\") pod \"dnsmasq-dns-744ffd65bc-crcn7\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.703367 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-crcn7\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.703507 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-config\") pod \"dnsmasq-dns-744ffd65bc-crcn7\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.725864 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlvm7\" (UniqueName: \"kubernetes.io/projected/48e6547a-641f-46ba-9d0b-977d59a8f401-kube-api-access-rlvm7\") pod \"dnsmasq-dns-744ffd65bc-crcn7\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.768449 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.865849 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-7tgw4"] Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.883541 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-tzkm7"] Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.894377 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-tzkm7"] Jan 21 15:51:00 crc kubenswrapper[4890]: I0121 15:51:00.894501 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.023467 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82p9g\" (UniqueName: \"kubernetes.io/projected/fd648b5d-a2fc-4618-bef7-612e7593065f-kube-api-access-82p9g\") pod \"dnsmasq-dns-95f5f6995-tzkm7\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.023655 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-config\") pod \"dnsmasq-dns-95f5f6995-tzkm7\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.023701 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-dns-svc\") pod \"dnsmasq-dns-95f5f6995-tzkm7\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 
15:51:01.125132 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82p9g\" (UniqueName: \"kubernetes.io/projected/fd648b5d-a2fc-4618-bef7-612e7593065f-kube-api-access-82p9g\") pod \"dnsmasq-dns-95f5f6995-tzkm7\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.125259 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-config\") pod \"dnsmasq-dns-95f5f6995-tzkm7\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.125287 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-dns-svc\") pod \"dnsmasq-dns-95f5f6995-tzkm7\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.126219 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-dns-svc\") pod \"dnsmasq-dns-95f5f6995-tzkm7\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.127176 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-config\") pod \"dnsmasq-dns-95f5f6995-tzkm7\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.175924 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82p9g\" 
(UniqueName: \"kubernetes.io/projected/fd648b5d-a2fc-4618-bef7-612e7593065f-kube-api-access-82p9g\") pod \"dnsmasq-dns-95f5f6995-tzkm7\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.270904 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.489241 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-crcn7"] Jan 21 15:51:01 crc kubenswrapper[4890]: W0121 15:51:01.512037 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48e6547a_641f_46ba_9d0b_977d59a8f401.slice/crio-35ba3d7c78a6960522b70197cce780584b7162ef76e36be2001bfd1ea5991e41 WatchSource:0}: Error finding container 35ba3d7c78a6960522b70197cce780584b7162ef76e36be2001bfd1ea5991e41: Status 404 returned error can't find the container with id 35ba3d7c78a6960522b70197cce780584b7162ef76e36be2001bfd1ea5991e41 Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.636802 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.638332 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.640832 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.643903 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-f4d2q" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.644056 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.644226 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.644551 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.644752 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.644939 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.658334 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744373 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744434 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744471 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744499 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744528 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744572 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/caae7093-b594-47fb-b863-38d825f0048d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744601 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744643 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rpcj\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-kube-api-access-5rpcj\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744673 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/caae7093-b594-47fb-b863-38d825f0048d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744712 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.744738 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.804392 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-tzkm7"] Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846658 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/caae7093-b594-47fb-b863-38d825f0048d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846740 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846797 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpcj\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-kube-api-access-5rpcj\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846819 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/caae7093-b594-47fb-b863-38d825f0048d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846855 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846878 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846907 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846935 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846953 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846974 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.846994 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.847804 4890 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.848562 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.849436 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.849905 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.850020 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.850221 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.854700 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.857591 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/caae7093-b594-47fb-b863-38d825f0048d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.859230 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.866211 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/caae7093-b594-47fb-b863-38d825f0048d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.870185 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rpcj\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-kube-api-access-5rpcj\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.885132 4890 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:01 crc kubenswrapper[4890]: I0121 15:51:01.970164 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.037388 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.038850 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.044131 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.044905 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.044977 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.045368 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rnwj9" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.045414 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.045649 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.046075 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 
15:51:02.060629 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159721 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159801 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ss4g\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-kube-api-access-8ss4g\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159824 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bb9aa52-0895-418e-8e0b-d922948e85a7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159846 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159870 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159888 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159907 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159927 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bb9aa52-0895-418e-8e0b-d922948e85a7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159948 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.159978 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.160001 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261669 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261734 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261808 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261875 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ss4g\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-kube-api-access-8ss4g\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261898 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bb9aa52-0895-418e-8e0b-d922948e85a7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261917 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261945 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261962 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261978 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.261996 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bb9aa52-0895-418e-8e0b-d922948e85a7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.262020 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.262804 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.263618 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.263672 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.263811 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.264346 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.264775 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.269112 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.270099 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.271308 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bb9aa52-0895-418e-8e0b-d922948e85a7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.272893 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bb9aa52-0895-418e-8e0b-d922948e85a7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.284891 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ss4g\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-kube-api-access-8ss4g\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.293888 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.380906 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.385381 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" event={"ID":"48e6547a-641f-46ba-9d0b-977d59a8f401","Type":"ContainerStarted","Data":"35ba3d7c78a6960522b70197cce780584b7162ef76e36be2001bfd1ea5991e41"} Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.386683 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" event={"ID":"fd648b5d-a2fc-4618-bef7-612e7593065f","Type":"ContainerStarted","Data":"04dd653f97dc289a84cad1c1b583a184e6481a4dae2890ccf7a9210a44fd01c4"} Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.551475 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:02 crc kubenswrapper[4890]: I0121 15:51:02.833908 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:02 crc kubenswrapper[4890]: W0121 15:51:02.841159 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bb9aa52_0895_418e_8e0b_d922948e85a7.slice/crio-12fe1f941cc500ce6c437416684b64243a759e1f17e92c4153cbf5a6bc326057 WatchSource:0}: Error finding container 12fe1f941cc500ce6c437416684b64243a759e1f17e92c4153cbf5a6bc326057: Status 404 returned error can't find the container with id 12fe1f941cc500ce6c437416684b64243a759e1f17e92c4153cbf5a6bc326057 Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.167006 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.168418 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.177156 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.177730 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.178025 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.178162 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hsd58" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.181479 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.208830 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.282593 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-default\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.282712 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.282769 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.282872 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqfp6\" (UniqueName: \"kubernetes.io/projected/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kube-api-access-gqfp6\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.282998 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.283109 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kolla-config\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.283172 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.283554 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.385440 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.385584 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-default\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.385673 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.385717 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.385763 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqfp6\" (UniqueName: \"kubernetes.io/projected/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kube-api-access-gqfp6\") pod \"openstack-galera-0\" (UID: 
\"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.385805 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.385852 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kolla-config\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.385879 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.388459 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.389473 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kolla-config\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.389557 4890 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.391450 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.397033 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.398827 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.411089 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqfp6\" (UniqueName: \"kubernetes.io/projected/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kube-api-access-gqfp6\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.411534 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-default\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.421466 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " pod="openstack/openstack-galera-0" Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.436375 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9bb9aa52-0895-418e-8e0b-d922948e85a7","Type":"ContainerStarted","Data":"12fe1f941cc500ce6c437416684b64243a759e1f17e92c4153cbf5a6bc326057"} Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.438861 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caae7093-b594-47fb-b863-38d825f0048d","Type":"ContainerStarted","Data":"f245e1fe5f5f6bde901fa2a6994facf2644276f4ebfc8d6b57ea64380d885c7a"} Jan 21 15:51:03 crc kubenswrapper[4890]: I0121 15:51:03.506604 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.122118 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.458685 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cc7a8eb5-11e0-4603-b80a-3b4f6e724770","Type":"ContainerStarted","Data":"b99c38f26ca2e62d9f5bb7864e021dad61fbc01aeb3cca6b4c9de2f33837adac"} Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.794950 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.796276 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.805052 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-cj8bt" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.813324 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.821902 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.822109 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.822201 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.826581 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.826635 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.826663 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.826691 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq8rp\" (UniqueName: \"kubernetes.io/projected/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kube-api-access-zq8rp\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.826725 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.826748 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.826788 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.826813 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.929606 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.930140 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.930166 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.930201 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq8rp\" (UniqueName: \"kubernetes.io/projected/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kube-api-access-zq8rp\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.930259 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.930284 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.930335 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.930411 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.932031 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.933148 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.936910 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.938493 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.939060 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.969583 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.974941 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq8rp\" (UniqueName: \"kubernetes.io/projected/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kube-api-access-zq8rp\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.975120 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:04 crc kubenswrapper[4890]: I0121 15:51:04.977742 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.006313 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.007539 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.014275 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.014540 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.014569 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-s6zqc" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.021966 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.137922 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-config-data\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.137995 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9jmm\" (UniqueName: \"kubernetes.io/projected/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kube-api-access-q9jmm\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.138051 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.138128 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kolla-config\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.138154 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.156821 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.240205 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kolla-config\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.240340 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.243906 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-config-data\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.241093 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kolla-config\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.243991 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9jmm\" (UniqueName: \"kubernetes.io/projected/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kube-api-access-q9jmm\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.244368 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.249104 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.249769 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-config-data\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.262252 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc 
kubenswrapper[4890]: I0121 15:51:05.263301 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9jmm\" (UniqueName: \"kubernetes.io/projected/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kube-api-access-q9jmm\") pod \"memcached-0\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") " pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.365438 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.817653 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:51:05 crc kubenswrapper[4890]: I0121 15:51:05.859193 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:51:07 crc kubenswrapper[4890]: I0121 15:51:07.005300 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:51:07 crc kubenswrapper[4890]: I0121 15:51:07.007370 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:51:07 crc kubenswrapper[4890]: I0121 15:51:07.017966 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-j25rq" Jan 21 15:51:07 crc kubenswrapper[4890]: I0121 15:51:07.037136 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:51:07 crc kubenswrapper[4890]: I0121 15:51:07.091926 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zr52\" (UniqueName: \"kubernetes.io/projected/bd6174ac-18d9-49c7-9c25-ad75ca3a2d97-kube-api-access-9zr52\") pod \"kube-state-metrics-0\" (UID: \"bd6174ac-18d9-49c7-9c25-ad75ca3a2d97\") " pod="openstack/kube-state-metrics-0" Jan 21 15:51:07 crc kubenswrapper[4890]: I0121 15:51:07.196259 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zr52\" (UniqueName: \"kubernetes.io/projected/bd6174ac-18d9-49c7-9c25-ad75ca3a2d97-kube-api-access-9zr52\") pod \"kube-state-metrics-0\" (UID: \"bd6174ac-18d9-49c7-9c25-ad75ca3a2d97\") " pod="openstack/kube-state-metrics-0" Jan 21 15:51:07 crc kubenswrapper[4890]: I0121 15:51:07.245363 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zr52\" (UniqueName: \"kubernetes.io/projected/bd6174ac-18d9-49c7-9c25-ad75ca3a2d97-kube-api-access-9zr52\") pod \"kube-state-metrics-0\" (UID: \"bd6174ac-18d9-49c7-9c25-ad75ca3a2d97\") " pod="openstack/kube-state-metrics-0" Jan 21 15:51:07 crc kubenswrapper[4890]: I0121 15:51:07.338605 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:51:09 crc kubenswrapper[4890]: I0121 15:51:09.911428 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pmrch"] Jan 21 15:51:09 crc kubenswrapper[4890]: I0121 15:51:09.912873 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pmrch" Jan 21 15:51:09 crc kubenswrapper[4890]: I0121 15:51:09.916043 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 15:51:09 crc kubenswrapper[4890]: I0121 15:51:09.916749 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-94j6z" Jan 21 15:51:09 crc kubenswrapper[4890]: I0121 15:51:09.916986 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 15:51:09 crc kubenswrapper[4890]: I0121 15:51:09.938676 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-dfk6x"] Jan 21 15:51:09 crc kubenswrapper[4890]: I0121 15:51:09.940970 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:09 crc kubenswrapper[4890]: I0121 15:51:09.944934 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dfk6x"] Jan 21 15:51:09 crc kubenswrapper[4890]: I0121 15:51:09.950328 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pmrch"] Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060291 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run-ovn\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060400 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-lib\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060492 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-log\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060518 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060547 4890 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbk87\" (UniqueName: \"kubernetes.io/projected/cdd2d089-a1a5-4e25-920a-a485d0fd319f-kube-api-access-qbk87\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060593 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-etc-ovs\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060616 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-combined-ca-bundle\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060667 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/233162f3-fe28-4476-bc40-eb4b138ae68a-scripts\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060710 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdd2d089-a1a5-4e25-920a-a485d0fd319f-scripts\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060740 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-log-ovn\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060761 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-ovn-controller-tls-certs\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060809 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58pt\" (UniqueName: \"kubernetes.io/projected/233162f3-fe28-4476-bc40-eb4b138ae68a-kube-api-access-j58pt\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.060835 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-run\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162532 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-etc-ovs\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162604 4890 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-combined-ca-bundle\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162652 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/233162f3-fe28-4476-bc40-eb4b138ae68a-scripts\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162701 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdd2d089-a1a5-4e25-920a-a485d0fd319f-scripts\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162729 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-log-ovn\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162751 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-ovn-controller-tls-certs\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162799 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58pt\" (UniqueName: 
\"kubernetes.io/projected/233162f3-fe28-4476-bc40-eb4b138ae68a-kube-api-access-j58pt\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162842 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-run\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162891 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run-ovn\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162923 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-lib\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.162981 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-log\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.163013 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") 
" pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.163037 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbk87\" (UniqueName: \"kubernetes.io/projected/cdd2d089-a1a5-4e25-920a-a485d0fd319f-kube-api-access-qbk87\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.163326 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-etc-ovs\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.163507 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-log-ovn\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.163828 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-lib\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.164663 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-log\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.165175 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/233162f3-fe28-4476-bc40-eb4b138ae68a-scripts\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.165695 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdd2d089-a1a5-4e25-920a-a485d0fd319f-scripts\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.169488 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-combined-ca-bundle\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.172156 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-ovn-controller-tls-certs\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.172595 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run-ovn\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.173693 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-run\") pod \"ovn-controller-ovs-dfk6x\" (UID: 
\"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.173695 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.183998 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbk87\" (UniqueName: \"kubernetes.io/projected/cdd2d089-a1a5-4e25-920a-a485d0fd319f-kube-api-access-qbk87\") pod \"ovn-controller-pmrch\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.184188 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58pt\" (UniqueName: \"kubernetes.io/projected/233162f3-fe28-4476-bc40-eb4b138ae68a-kube-api-access-j58pt\") pod \"ovn-controller-ovs-dfk6x\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.242663 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pmrch" Jan 21 15:51:10 crc kubenswrapper[4890]: I0121 15:51:10.273872 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.144134 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.148728 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.158067 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.158310 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.159386 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.159628 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.159790 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4bhpz" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.161553 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.197043 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.197146 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.197192 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.197276 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.197309 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.197676 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h548d\" (UniqueName: \"kubernetes.io/projected/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-kube-api-access-h548d\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.197884 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-config\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.197984 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.298982 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.299030 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.299068 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.299088 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.299113 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " 
pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.299165 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h548d\" (UniqueName: \"kubernetes.io/projected/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-kube-api-access-h548d\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.299197 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-config\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.299217 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.300739 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.300792 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-config\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.300994 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.301184 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.304577 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.310019 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.317786 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.327726 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h548d\" (UniqueName: \"kubernetes.io/projected/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-kube-api-access-h548d\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " 
pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.342611 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:12 crc kubenswrapper[4890]: I0121 15:51:12.477050 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:13 crc kubenswrapper[4890]: I0121 15:51:13.590549 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91b03ee9-0cb8-49eb-b3da-3d1c42e15720","Type":"ContainerStarted","Data":"75966d5f0fc4cbeafe8e22e665477e5daae59c68c111d960daa8e1678d776b2c"} Jan 21 15:51:13 crc kubenswrapper[4890]: I0121 15:51:13.591819 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"770a4f11-b2a3-46fd-a06d-3af27edd3d9f","Type":"ContainerStarted","Data":"13309ead34de9ffa9030777afa7d73bcbd20e6dd5d99e7fd9bac7506ced9f198"} Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.405720 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.408123 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.412889 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.412917 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-c7894" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.413189 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.413412 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.423755 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.439063 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.439119 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqw9b\" (UniqueName: \"kubernetes.io/projected/3ab783d9-382b-4b61-85f0-f4a82160effe-kube-api-access-vqw9b\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.439244 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.439315 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.439384 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.439450 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.439521 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.439683 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-config\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc 
kubenswrapper[4890]: I0121 15:51:14.555314 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.555420 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.555479 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-config\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.555510 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.555540 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqw9b\" (UniqueName: \"kubernetes.io/projected/3ab783d9-382b-4b61-85f0-f4a82160effe-kube-api-access-vqw9b\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.555576 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.555620 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.555647 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.555730 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.556591 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.557050 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" 
Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.557234 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-config\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.564943 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.571840 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.575475 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqw9b\" (UniqueName: \"kubernetes.io/projected/3ab783d9-382b-4b61-85f0-f4a82160effe-kube-api-access-vqw9b\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.577142 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.578859 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:14 crc kubenswrapper[4890]: I0121 15:51:14.738096 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:30 crc kubenswrapper[4890]: E0121 15:51:30.746456 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 21 15:51:30 crc kubenswrapper[4890]: E0121 15:51:30.747081 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ss4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(9bb9aa52-0895-418e-8e0b-d922948e85a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:51:30 crc 
kubenswrapper[4890]: E0121 15:51:30.748556 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" Jan 21 15:51:30 crc kubenswrapper[4890]: E0121 15:51:30.769291 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 21 15:51:30 crc kubenswrapper[4890]: E0121 15:51:30.769518 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rpcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(caae7093-b594-47fb-b863-38d825f0048d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:51:30 crc 
kubenswrapper[4890]: E0121 15:51:30.770696 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="caae7093-b594-47fb-b863-38d825f0048d" Jan 21 15:51:31 crc kubenswrapper[4890]: I0121 15:51:31.265838 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:51:31 crc kubenswrapper[4890]: E0121 15:51:31.726115 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="caae7093-b594-47fb-b863-38d825f0048d" Jan 21 15:51:31 crc kubenswrapper[4890]: E0121 15:51:31.726242 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" Jan 21 15:51:32 crc kubenswrapper[4890]: E0121 15:51:32.490177 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 21 15:51:32 crc kubenswrapper[4890]: E0121 15:51:32.490381 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqfp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(cc7a8eb5-11e0-4603-b80a-3b4f6e724770): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:51:32 crc kubenswrapper[4890]: E0121 15:51:32.491549 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" Jan 21 15:51:32 crc kubenswrapper[4890]: E0121 15:51:32.737374 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.224875 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.225700 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vbkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-m5gcl_openstack(ab879f4b-e729-425b-9334-7edd77094726): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.227705 4890 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" podUID="ab879f4b-e729-425b-9334-7edd77094726" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.233131 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.233289 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82p9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-tzkm7_openstack(fd648b5d-a2fc-4618-bef7-612e7593065f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.234669 4890 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" podUID="fd648b5d-a2fc-4618-bef7-612e7593065f" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.249006 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.249172 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkdnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-7tgw4_openstack(71b1307a-b350-465b-a8f6-2087f639cdaa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.250490 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" podUID="71b1307a-b350-465b-a8f6-2087f639cdaa" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.252768 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.252926 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlvm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil
,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-crcn7_openstack(48e6547a-641f-46ba-9d0b-977d59a8f401): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.256146 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" podUID="48e6547a-641f-46ba-9d0b-977d59a8f401" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.745093 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" podUID="fd648b5d-a2fc-4618-bef7-612e7593065f" Jan 21 15:51:33 crc kubenswrapper[4890]: E0121 15:51:33.745476 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" podUID="48e6547a-641f-46ba-9d0b-977d59a8f401" Jan 21 15:51:33 crc kubenswrapper[4890]: W0121 15:51:33.949735 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e21e0c9_91df_4f87_a32f_30fa3d3fa874.slice/crio-5390a6179693f6250b519db66dffb623100123c375a8f1fc8ae8873748d538c9 WatchSource:0}: Error finding container 5390a6179693f6250b519db66dffb623100123c375a8f1fc8ae8873748d538c9: Status 404 returned error can't find the container with id 5390a6179693f6250b519db66dffb623100123c375a8f1fc8ae8873748d538c9 Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.094151 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.100749 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.210278 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-config\") pod \"ab879f4b-e729-425b-9334-7edd77094726\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.210381 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkdnx\" (UniqueName: \"kubernetes.io/projected/71b1307a-b350-465b-a8f6-2087f639cdaa-kube-api-access-gkdnx\") pod \"71b1307a-b350-465b-a8f6-2087f639cdaa\" (UID: \"71b1307a-b350-465b-a8f6-2087f639cdaa\") " Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.210463 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vbkl\" (UniqueName: \"kubernetes.io/projected/ab879f4b-e729-425b-9334-7edd77094726-kube-api-access-9vbkl\") pod \"ab879f4b-e729-425b-9334-7edd77094726\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.210534 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71b1307a-b350-465b-a8f6-2087f639cdaa-config\") pod \"71b1307a-b350-465b-a8f6-2087f639cdaa\" (UID: \"71b1307a-b350-465b-a8f6-2087f639cdaa\") " Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.210552 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-dns-svc\") pod \"ab879f4b-e729-425b-9334-7edd77094726\" (UID: \"ab879f4b-e729-425b-9334-7edd77094726\") " Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.211264 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab879f4b-e729-425b-9334-7edd77094726" (UID: "ab879f4b-e729-425b-9334-7edd77094726"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.211994 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-config" (OuterVolumeSpecName: "config") pod "ab879f4b-e729-425b-9334-7edd77094726" (UID: "ab879f4b-e729-425b-9334-7edd77094726"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.212498 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71b1307a-b350-465b-a8f6-2087f639cdaa-config" (OuterVolumeSpecName: "config") pod "71b1307a-b350-465b-a8f6-2087f639cdaa" (UID: "71b1307a-b350-465b-a8f6-2087f639cdaa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.216091 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b1307a-b350-465b-a8f6-2087f639cdaa-kube-api-access-gkdnx" (OuterVolumeSpecName: "kube-api-access-gkdnx") pod "71b1307a-b350-465b-a8f6-2087f639cdaa" (UID: "71b1307a-b350-465b-a8f6-2087f639cdaa"). InnerVolumeSpecName "kube-api-access-gkdnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.216738 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab879f4b-e729-425b-9334-7edd77094726-kube-api-access-9vbkl" (OuterVolumeSpecName: "kube-api-access-9vbkl") pod "ab879f4b-e729-425b-9334-7edd77094726" (UID: "ab879f4b-e729-425b-9334-7edd77094726"). InnerVolumeSpecName "kube-api-access-9vbkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.312325 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.312596 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab879f4b-e729-425b-9334-7edd77094726-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.312608 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkdnx\" (UniqueName: \"kubernetes.io/projected/71b1307a-b350-465b-a8f6-2087f639cdaa-kube-api-access-gkdnx\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.312619 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vbkl\" (UniqueName: \"kubernetes.io/projected/ab879f4b-e729-425b-9334-7edd77094726-kube-api-access-9vbkl\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.312629 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71b1307a-b350-465b-a8f6-2087f639cdaa-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.554891 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.624756 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dfk6x"] Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.637265 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pmrch"] Jan 21 15:51:34 crc kubenswrapper[4890]: W0121 15:51:34.653328 4890 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod233162f3_fe28_4476_bc40_eb4b138ae68a.slice/crio-52b1f6dfa2942f85834acf1faaac5170191479c15990d3f6453b2be4099fb535 WatchSource:0}: Error finding container 52b1f6dfa2942f85834acf1faaac5170191479c15990d3f6453b2be4099fb535: Status 404 returned error can't find the container with id 52b1f6dfa2942f85834acf1faaac5170191479c15990d3f6453b2be4099fb535 Jan 21 15:51:34 crc kubenswrapper[4890]: W0121 15:51:34.662613 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdd2d089_a1a5_4e25_920a_a485d0fd319f.slice/crio-2d4edbfe177c7ef30092237586acb85285f53fc0a2d9ab2be6f550b1b2daf014 WatchSource:0}: Error finding container 2d4edbfe177c7ef30092237586acb85285f53fc0a2d9ab2be6f550b1b2daf014: Status 404 returned error can't find the container with id 2d4edbfe177c7ef30092237586acb85285f53fc0a2d9ab2be6f550b1b2daf014 Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.738329 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:51:34 crc kubenswrapper[4890]: W0121 15:51:34.745028 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab783d9_382b_4b61_85f0_f4a82160effe.slice/crio-b9855ec1171a6828e4b84ce1c79fef544afce75c5d28ca7615c5af5eaf77ad1e WatchSource:0}: Error finding container b9855ec1171a6828e4b84ce1c79fef544afce75c5d28ca7615c5af5eaf77ad1e: Status 404 returned error can't find the container with id b9855ec1171a6828e4b84ce1c79fef544afce75c5d28ca7615c5af5eaf77ad1e Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.761992 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"770a4f11-b2a3-46fd-a06d-3af27edd3d9f","Type":"ContainerStarted","Data":"58a6039e13c8c21e15265060210fffee78adef95926968b02f31c6424dc6e4e2"} Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 
15:51:34.762127 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.764676 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch" event={"ID":"cdd2d089-a1a5-4e25-920a-a485d0fd319f","Type":"ContainerStarted","Data":"2d4edbfe177c7ef30092237586acb85285f53fc0a2d9ab2be6f550b1b2daf014"} Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.766156 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" event={"ID":"71b1307a-b350-465b-a8f6-2087f639cdaa","Type":"ContainerDied","Data":"e5ece9c611ae8621e75281d862e2a46d777feb40c18ac6bc5b501937581cc0a1"} Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.766225 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-7tgw4" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.769534 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd6174ac-18d9-49c7-9c25-ad75ca3a2d97","Type":"ContainerStarted","Data":"88eaead50e63f3ba605c0cd3876e21faf17e07bf8f62b620e0ba5db7ae9235db"} Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.774292 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91b03ee9-0cb8-49eb-b3da-3d1c42e15720","Type":"ContainerStarted","Data":"81766ea3441972edff851d9a6f741a258ff1e9c431c8076fa7bee5bc3e0d1416"} Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.787708 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" event={"ID":"ab879f4b-e729-425b-9334-7edd77094726","Type":"ContainerDied","Data":"686eb2047ce2db8b7f239d1cbb35d0c965178f11de178df1b052bb7652f7fd81"} Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.787803 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-m5gcl" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.793901 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=10.117494326 podStartE2EDuration="30.793878674s" podCreationTimestamp="2026-01-21 15:51:04 +0000 UTC" firstStartedPulling="2026-01-21 15:51:13.407441878 +0000 UTC m=+1155.768884287" lastFinishedPulling="2026-01-21 15:51:34.083826226 +0000 UTC m=+1176.445268635" observedRunningTime="2026-01-21 15:51:34.787218258 +0000 UTC m=+1177.148660667" watchObservedRunningTime="2026-01-21 15:51:34.793878674 +0000 UTC m=+1177.155321073" Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.795009 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dfk6x" event={"ID":"233162f3-fe28-4476-bc40-eb4b138ae68a","Type":"ContainerStarted","Data":"52b1f6dfa2942f85834acf1faaac5170191479c15990d3f6453b2be4099fb535"} Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.796437 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3ab783d9-382b-4b61-85f0-f4a82160effe","Type":"ContainerStarted","Data":"b9855ec1171a6828e4b84ce1c79fef544afce75c5d28ca7615c5af5eaf77ad1e"} Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.797659 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4e21e0c9-91df-4f87-a32f-30fa3d3fa874","Type":"ContainerStarted","Data":"5390a6179693f6250b519db66dffb623100123c375a8f1fc8ae8873748d538c9"} Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.859426 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-m5gcl"] Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.863709 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-m5gcl"] Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.886728 4890 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-7tgw4"] Jan 21 15:51:34 crc kubenswrapper[4890]: I0121 15:51:34.898402 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-7tgw4"] Jan 21 15:51:35 crc kubenswrapper[4890]: I0121 15:51:35.928886 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b1307a-b350-465b-a8f6-2087f639cdaa" path="/var/lib/kubelet/pods/71b1307a-b350-465b-a8f6-2087f639cdaa/volumes" Jan 21 15:51:35 crc kubenswrapper[4890]: I0121 15:51:35.929417 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab879f4b-e729-425b-9334-7edd77094726" path="/var/lib/kubelet/pods/ab879f4b-e729-425b-9334-7edd77094726/volumes" Jan 21 15:51:38 crc kubenswrapper[4890]: I0121 15:51:38.839306 4890 generic.go:334] "Generic (PLEG): container finished" podID="91b03ee9-0cb8-49eb-b3da-3d1c42e15720" containerID="81766ea3441972edff851d9a6f741a258ff1e9c431c8076fa7bee5bc3e0d1416" exitCode=0 Jan 21 15:51:38 crc kubenswrapper[4890]: I0121 15:51:38.839412 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91b03ee9-0cb8-49eb-b3da-3d1c42e15720","Type":"ContainerDied","Data":"81766ea3441972edff851d9a6f741a258ff1e9c431c8076fa7bee5bc3e0d1416"} Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.847925 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch" event={"ID":"cdd2d089-a1a5-4e25-920a-a485d0fd319f","Type":"ContainerStarted","Data":"fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9"} Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.848551 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-pmrch" Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.850780 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"4e21e0c9-91df-4f87-a32f-30fa3d3fa874","Type":"ContainerStarted","Data":"b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0"} Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.853085 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd6174ac-18d9-49c7-9c25-ad75ca3a2d97","Type":"ContainerStarted","Data":"b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098"} Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.853202 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.854921 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91b03ee9-0cb8-49eb-b3da-3d1c42e15720","Type":"ContainerStarted","Data":"ee8636883cf7ef685bc793e2761b19d6a77deb5c7898b985a0cc704d99683d91"} Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.856440 4890 generic.go:334] "Generic (PLEG): container finished" podID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerID="3575042dd8f2422aba0e5359772f4de6498b60c970bb53645ccc0512d6212730" exitCode=0 Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.856501 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dfk6x" event={"ID":"233162f3-fe28-4476-bc40-eb4b138ae68a","Type":"ContainerDied","Data":"3575042dd8f2422aba0e5359772f4de6498b60c970bb53645ccc0512d6212730"} Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.858969 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3ab783d9-382b-4b61-85f0-f4a82160effe","Type":"ContainerStarted","Data":"7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09"} Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.874740 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pmrch" 
podStartSLOduration=26.546889407 podStartE2EDuration="30.874721956s" podCreationTimestamp="2026-01-21 15:51:09 +0000 UTC" firstStartedPulling="2026-01-21 15:51:34.667540521 +0000 UTC m=+1177.028982920" lastFinishedPulling="2026-01-21 15:51:38.99537306 +0000 UTC m=+1181.356815469" observedRunningTime="2026-01-21 15:51:39.867313702 +0000 UTC m=+1182.228756121" watchObservedRunningTime="2026-01-21 15:51:39.874721956 +0000 UTC m=+1182.236164365" Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.898396 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.224844935 podStartE2EDuration="36.898251089s" podCreationTimestamp="2026-01-21 15:51:03 +0000 UTC" firstStartedPulling="2026-01-21 15:51:13.407402527 +0000 UTC m=+1155.768844936" lastFinishedPulling="2026-01-21 15:51:34.080808681 +0000 UTC m=+1176.442251090" observedRunningTime="2026-01-21 15:51:39.887634556 +0000 UTC m=+1182.249076985" watchObservedRunningTime="2026-01-21 15:51:39.898251089 +0000 UTC m=+1182.259693488" Jan 21 15:51:39 crc kubenswrapper[4890]: I0121 15:51:39.913008 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=29.307638754 podStartE2EDuration="33.912987225s" podCreationTimestamp="2026-01-21 15:51:06 +0000 UTC" firstStartedPulling="2026-01-21 15:51:34.571779186 +0000 UTC m=+1176.933221595" lastFinishedPulling="2026-01-21 15:51:39.177127657 +0000 UTC m=+1181.538570066" observedRunningTime="2026-01-21 15:51:39.908400681 +0000 UTC m=+1182.269843090" watchObservedRunningTime="2026-01-21 15:51:39.912987225 +0000 UTC m=+1182.274429634" Jan 21 15:51:40 crc kubenswrapper[4890]: I0121 15:51:40.366966 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 15:51:40 crc kubenswrapper[4890]: I0121 15:51:40.879425 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-dfk6x" event={"ID":"233162f3-fe28-4476-bc40-eb4b138ae68a","Type":"ContainerStarted","Data":"3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c"} Jan 21 15:51:40 crc kubenswrapper[4890]: I0121 15:51:40.879473 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dfk6x" event={"ID":"233162f3-fe28-4476-bc40-eb4b138ae68a","Type":"ContainerStarted","Data":"283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f"} Jan 21 15:51:40 crc kubenswrapper[4890]: I0121 15:51:40.879965 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:40 crc kubenswrapper[4890]: I0121 15:51:40.880038 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:51:40 crc kubenswrapper[4890]: I0121 15:51:40.902605 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-dfk6x" podStartSLOduration=28.08190508 podStartE2EDuration="31.902585253s" podCreationTimestamp="2026-01-21 15:51:09 +0000 UTC" firstStartedPulling="2026-01-21 15:51:34.65541692 +0000 UTC m=+1177.016859329" lastFinishedPulling="2026-01-21 15:51:38.476097093 +0000 UTC m=+1180.837539502" observedRunningTime="2026-01-21 15:51:40.897741153 +0000 UTC m=+1183.259183562" watchObservedRunningTime="2026-01-21 15:51:40.902585253 +0000 UTC m=+1183.264027672" Jan 21 15:51:43 crc kubenswrapper[4890]: I0121 15:51:43.902093 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3ab783d9-382b-4b61-85f0-f4a82160effe","Type":"ContainerStarted","Data":"f816fbeb470ee262ad039181a4ae9efe8ea0d75924ce11d2ac8682922df4c451"} Jan 21 15:51:43 crc kubenswrapper[4890]: I0121 15:51:43.903773 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"cc7a8eb5-11e0-4603-b80a-3b4f6e724770","Type":"ContainerStarted","Data":"2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd"} Jan 21 15:51:43 crc kubenswrapper[4890]: I0121 15:51:43.905844 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4e21e0c9-91df-4f87-a32f-30fa3d3fa874","Type":"ContainerStarted","Data":"9c227e45c94f7742e46f8728f499fa534251a81e5033658fec415f426bd7319e"} Jan 21 15:51:43 crc kubenswrapper[4890]: I0121 15:51:43.925593 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.938772573 podStartE2EDuration="30.925572345s" podCreationTimestamp="2026-01-21 15:51:13 +0000 UTC" firstStartedPulling="2026-01-21 15:51:34.746818777 +0000 UTC m=+1177.108261186" lastFinishedPulling="2026-01-21 15:51:42.733618549 +0000 UTC m=+1185.095060958" observedRunningTime="2026-01-21 15:51:43.922760096 +0000 UTC m=+1186.284202505" watchObservedRunningTime="2026-01-21 15:51:43.925572345 +0000 UTC m=+1186.287014754" Jan 21 15:51:43 crc kubenswrapper[4890]: I0121 15:51:43.963882 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=24.234460518 podStartE2EDuration="32.963858485s" podCreationTimestamp="2026-01-21 15:51:11 +0000 UTC" firstStartedPulling="2026-01-21 15:51:33.994634324 +0000 UTC m=+1176.356076733" lastFinishedPulling="2026-01-21 15:51:42.724032291 +0000 UTC m=+1185.085474700" observedRunningTime="2026-01-21 15:51:43.955522298 +0000 UTC m=+1186.316964707" watchObservedRunningTime="2026-01-21 15:51:43.963858485 +0000 UTC m=+1186.325300904" Jan 21 15:51:44 crc kubenswrapper[4890]: I0121 15:51:44.739036 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:44 crc kubenswrapper[4890]: I0121 15:51:44.739088 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:44 crc kubenswrapper[4890]: I0121 15:51:44.783966 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:44 crc kubenswrapper[4890]: I0121 15:51:44.913542 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9bb9aa52-0895-418e-8e0b-d922948e85a7","Type":"ContainerStarted","Data":"33df5a7bef461b044f5948e6d82a92f50152c168d4b306d9e90252ca8c70cd02"} Jan 21 15:51:44 crc kubenswrapper[4890]: I0121 15:51:44.915911 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caae7093-b594-47fb-b863-38d825f0048d","Type":"ContainerStarted","Data":"f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66"} Jan 21 15:51:44 crc kubenswrapper[4890]: I0121 15:51:44.966058 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.158656 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.159011 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.228676 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-crcn7"] Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.267830 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5wh28"] Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.269166 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.274548 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.291795 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-c4t69"] Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.293418 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.314473 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5wh28"] Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.314798 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.321797 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-c4t69"] Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.327629 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.439693 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovs-rundir\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.439730 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8b8\" (UniqueName: \"kubernetes.io/projected/57d2ee81-accb-4ff7-8fa6-52ed7d728258-kube-api-access-hn8b8\") pod \"ovn-controller-metrics-5wh28\" (UID: 
\"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.439752 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57d2ee81-accb-4ff7-8fa6-52ed7d728258-config\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.439804 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-config\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.439821 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-combined-ca-bundle\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.439852 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovn-rundir\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.439935 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: 
\"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.439961 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.440033 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5lw\" (UniqueName: \"kubernetes.io/projected/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-kube-api-access-bp5lw\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.440066 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.477290 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544341 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-config\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544407 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-combined-ca-bundle\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544448 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovn-rundir\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544481 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544506 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544531 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5lw\" (UniqueName: \"kubernetes.io/projected/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-kube-api-access-bp5lw\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544564 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544625 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovs-rundir\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544657 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8b8\" (UniqueName: \"kubernetes.io/projected/57d2ee81-accb-4ff7-8fa6-52ed7d728258-kube-api-access-hn8b8\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.544675 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57d2ee81-accb-4ff7-8fa6-52ed7d728258-config\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.545246 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovs-rundir\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.545322 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovn-rundir\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.545421 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-config\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.545502 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57d2ee81-accb-4ff7-8fa6-52ed7d728258-config\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.546427 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.546531 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.567002 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.572871 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.583417 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8b8\" (UniqueName: \"kubernetes.io/projected/57d2ee81-accb-4ff7-8fa6-52ed7d728258-kube-api-access-hn8b8\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.583977 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-combined-ca-bundle\") pod \"ovn-controller-metrics-5wh28\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.596193 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5lw\" (UniqueName: \"kubernetes.io/projected/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-kube-api-access-bp5lw\") pod \"dnsmasq-dns-7bbdc7ccd7-c4t69\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.634605 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.647513 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.735645 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-tzkm7"] Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.760293 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.769082 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-6m4nj"] Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.770329 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.772945 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.831421 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-6m4nj"] Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.858567 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-config\") pod \"48e6547a-641f-46ba-9d0b-977d59a8f401\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.858735 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-dns-svc\") pod \"48e6547a-641f-46ba-9d0b-977d59a8f401\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.858782 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlvm7\" (UniqueName: 
\"kubernetes.io/projected/48e6547a-641f-46ba-9d0b-977d59a8f401-kube-api-access-rlvm7\") pod \"48e6547a-641f-46ba-9d0b-977d59a8f401\" (UID: \"48e6547a-641f-46ba-9d0b-977d59a8f401\") " Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.859201 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-config" (OuterVolumeSpecName: "config") pod "48e6547a-641f-46ba-9d0b-977d59a8f401" (UID: "48e6547a-641f-46ba-9d0b-977d59a8f401"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.859722 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48e6547a-641f-46ba-9d0b-977d59a8f401" (UID: "48e6547a-641f-46ba-9d0b-977d59a8f401"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.859890 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-config\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.860035 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.860090 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.860133 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2drg\" (UniqueName: \"kubernetes.io/projected/501304ef-40ab-490a-8df7-77f1804b4f80-kube-api-access-r2drg\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.860189 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.860298 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.860315 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48e6547a-641f-46ba-9d0b-977d59a8f401-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.864971 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e6547a-641f-46ba-9d0b-977d59a8f401-kube-api-access-rlvm7" (OuterVolumeSpecName: "kube-api-access-rlvm7") pod "48e6547a-641f-46ba-9d0b-977d59a8f401" (UID: "48e6547a-641f-46ba-9d0b-977d59a8f401"). InnerVolumeSpecName "kube-api-access-rlvm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.948664 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.950473 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-crcn7" event={"ID":"48e6547a-641f-46ba-9d0b-977d59a8f401","Type":"ContainerDied","Data":"35ba3d7c78a6960522b70197cce780584b7162ef76e36be2001bfd1ea5991e41"} Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.950511 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.961830 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.961919 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.961955 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2drg\" (UniqueName: \"kubernetes.io/projected/501304ef-40ab-490a-8df7-77f1804b4f80-kube-api-access-r2drg\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.961992 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.962059 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-config\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.962142 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlvm7\" (UniqueName: \"kubernetes.io/projected/48e6547a-641f-46ba-9d0b-977d59a8f401-kube-api-access-rlvm7\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.962988 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-config\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.963264 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.966696 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: 
\"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.967631 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:45 crc kubenswrapper[4890]: I0121 15:51:45.989385 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2drg\" (UniqueName: \"kubernetes.io/projected/501304ef-40ab-490a-8df7-77f1804b4f80-kube-api-access-r2drg\") pod \"dnsmasq-dns-757dc6fff9-6m4nj\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.019493 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.027726 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-crcn7"] Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.036548 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-crcn7"] Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.086158 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.096723 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.105717 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.167058 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-dns-svc\") pod \"fd648b5d-a2fc-4618-bef7-612e7593065f\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.167187 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82p9g\" (UniqueName: \"kubernetes.io/projected/fd648b5d-a2fc-4618-bef7-612e7593065f-kube-api-access-82p9g\") pod \"fd648b5d-a2fc-4618-bef7-612e7593065f\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.167211 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-config\") pod \"fd648b5d-a2fc-4618-bef7-612e7593065f\" (UID: \"fd648b5d-a2fc-4618-bef7-612e7593065f\") " Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.167854 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd648b5d-a2fc-4618-bef7-612e7593065f" (UID: "fd648b5d-a2fc-4618-bef7-612e7593065f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.169906 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-config" (OuterVolumeSpecName: "config") pod "fd648b5d-a2fc-4618-bef7-612e7593065f" (UID: "fd648b5d-a2fc-4618-bef7-612e7593065f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.177114 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd648b5d-a2fc-4618-bef7-612e7593065f-kube-api-access-82p9g" (OuterVolumeSpecName: "kube-api-access-82p9g") pod "fd648b5d-a2fc-4618-bef7-612e7593065f" (UID: "fd648b5d-a2fc-4618-bef7-612e7593065f"). InnerVolumeSpecName "kube-api-access-82p9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.268615 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.268646 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82p9g\" (UniqueName: \"kubernetes.io/projected/fd648b5d-a2fc-4618-bef7-612e7593065f-kube-api-access-82p9g\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.268661 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd648b5d-a2fc-4618-bef7-612e7593065f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.312631 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5wh28"] Jan 21 15:51:46 crc kubenswrapper[4890]: W0121 15:51:46.313076 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57d2ee81_accb_4ff7_8fa6_52ed7d728258.slice/crio-f682bc92c61a33cc5de65221dcbf0ebf1fde6e5e3faceb23a8305e76d84ec7f1 WatchSource:0}: Error finding container f682bc92c61a33cc5de65221dcbf0ebf1fde6e5e3faceb23a8305e76d84ec7f1: Status 404 returned error can't find the container with id 
f682bc92c61a33cc5de65221dcbf0ebf1fde6e5e3faceb23a8305e76d84ec7f1 Jan 21 15:51:46 crc kubenswrapper[4890]: W0121 15:51:46.433965 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a8e5a07_276f_4967_bd9e_dcf0d2cc70bd.slice/crio-f8e2707c282e2b1319af3af7a881ef48d4f5757b639fded5941e7d005c2b9671 WatchSource:0}: Error finding container f8e2707c282e2b1319af3af7a881ef48d4f5757b639fded5941e7d005c2b9671: Status 404 returned error can't find the container with id f8e2707c282e2b1319af3af7a881ef48d4f5757b639fded5941e7d005c2b9671 Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.453737 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-c4t69"] Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.463463 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-6m4nj"] Jan 21 15:51:46 crc kubenswrapper[4890]: W0121 15:51:46.476645 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod501304ef_40ab_490a_8df7_77f1804b4f80.slice/crio-626d346bba74b79f1f53deed5a8f33f2164e40e825d82ac5df6e9e476c84cd8c WatchSource:0}: Error finding container 626d346bba74b79f1f53deed5a8f33f2164e40e825d82ac5df6e9e476c84cd8c: Status 404 returned error can't find the container with id 626d346bba74b79f1f53deed5a8f33f2164e40e825d82ac5df6e9e476c84cd8c Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.540229 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.541724 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.545604 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.545804 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.545823 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.546414 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-s66c4" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.587258 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.680636 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.680689 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.680738 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bnf\" (UniqueName: \"kubernetes.io/projected/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-kube-api-access-c7bnf\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " 
pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.680775 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-config\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.680797 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.680832 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.680894 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-scripts\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.782541 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.782588 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.782637 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7bnf\" (UniqueName: \"kubernetes.io/projected/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-kube-api-access-c7bnf\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.782679 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-config\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.782705 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.782753 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.782845 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-scripts\") pod \"ovn-northd-0\" (UID: 
\"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.783841 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-config\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.783943 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-scripts\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.784964 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.789232 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.789985 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.793835 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.807384 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7bnf\" (UniqueName: \"kubernetes.io/projected/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-kube-api-access-c7bnf\") pod \"ovn-northd-0\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") " pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.919673 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.956459 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" event={"ID":"501304ef-40ab-490a-8df7-77f1804b4f80","Type":"ContainerStarted","Data":"626d346bba74b79f1f53deed5a8f33f2164e40e825d82ac5df6e9e476c84cd8c"} Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.957600 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" event={"ID":"fd648b5d-a2fc-4618-bef7-612e7593065f","Type":"ContainerDied","Data":"04dd653f97dc289a84cad1c1b583a184e6481a4dae2890ccf7a9210a44fd01c4"} Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.957656 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-tzkm7" Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.961510 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5wh28" event={"ID":"57d2ee81-accb-4ff7-8fa6-52ed7d728258","Type":"ContainerStarted","Data":"e1aa6bfb45b550829709119ceae8ae53f1b530480df2a6e2a81fbe2d0d43a190"} Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.961544 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5wh28" event={"ID":"57d2ee81-accb-4ff7-8fa6-52ed7d728258","Type":"ContainerStarted","Data":"f682bc92c61a33cc5de65221dcbf0ebf1fde6e5e3faceb23a8305e76d84ec7f1"} Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.963385 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" event={"ID":"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd","Type":"ContainerStarted","Data":"f8e2707c282e2b1319af3af7a881ef48d4f5757b639fded5941e7d005c2b9671"} Jan 21 15:51:46 crc kubenswrapper[4890]: I0121 15:51:46.987611 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5wh28" podStartSLOduration=1.987586936 podStartE2EDuration="1.987586936s" podCreationTimestamp="2026-01-21 15:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:46.983744681 +0000 UTC m=+1189.345187090" watchObservedRunningTime="2026-01-21 15:51:46.987586936 +0000 UTC m=+1189.349029345" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.053651 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-tzkm7"] Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.059966 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-tzkm7"] Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.375102 4890 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.441286 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:51:47 crc kubenswrapper[4890]: W0121 15:51:47.445678 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod332f4b6c_7fea_4dae_bb46_3c35ee84ba25.slice/crio-547edafaa7999af09852c07995b5137528252aa181ddc52eb350cd445c381aee WatchSource:0}: Error finding container 547edafaa7999af09852c07995b5137528252aa181ddc52eb350cd445c381aee: Status 404 returned error can't find the container with id 547edafaa7999af09852c07995b5137528252aa181ddc52eb350cd445c381aee Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.463638 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-6m4nj"] Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.513858 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-lwp7d"] Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.515692 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.539643 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-lwp7d"] Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.605228 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.605310 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.605371 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnxts\" (UniqueName: \"kubernetes.io/projected/d33597fc-f17b-4c75-ad8d-2519551825f1-kube-api-access-jnxts\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.605427 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.605516 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-config\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.707237 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.707362 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.707408 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnxts\" (UniqueName: \"kubernetes.io/projected/d33597fc-f17b-4c75-ad8d-2519551825f1-kube-api-access-jnxts\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.707469 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.707530 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-config\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.708740 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.708772 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-config\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.708790 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.709316 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.725738 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnxts\" (UniqueName: \"kubernetes.io/projected/d33597fc-f17b-4c75-ad8d-2519551825f1-kube-api-access-jnxts\") pod 
\"dnsmasq-dns-6cb545bd4c-lwp7d\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.897553 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.930740 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e6547a-641f-46ba-9d0b-977d59a8f401" path="/var/lib/kubelet/pods/48e6547a-641f-46ba-9d0b-977d59a8f401/volumes" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.931084 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd648b5d-a2fc-4618-bef7-612e7593065f" path="/var/lib/kubelet/pods/fd648b5d-a2fc-4618-bef7-612e7593065f/volumes" Jan 21 15:51:47 crc kubenswrapper[4890]: I0121 15:51:47.980535 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"332f4b6c-7fea-4dae-bb46-3c35ee84ba25","Type":"ContainerStarted","Data":"547edafaa7999af09852c07995b5137528252aa181ddc52eb350cd445c381aee"} Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.127713 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-lwp7d"] Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.581299 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.587295 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.591822 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.591954 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gmq76" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.591993 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.592406 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.605395 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.738343 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-lock\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.738724 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.738759 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9xx\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-kube-api-access-qs9xx\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " 
pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.738833 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-cache\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.738856 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.761944 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.761999 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.840050 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-lock\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.840110 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.840148 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs9xx\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-kube-api-access-qs9xx\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.840229 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-cache\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.840260 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: E0121 15:51:48.840327 4890 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 15:51:48 crc kubenswrapper[4890]: E0121 15:51:48.840385 4890 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 15:51:48 crc kubenswrapper[4890]: E0121 15:51:48.840450 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift podName:e7d46fba-02db-42e1-a916-1b2528bbdd52 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:51:49.340428882 +0000 UTC m=+1191.701871301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift") pod "swift-storage-0" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52") : configmap "swift-ring-files" not found Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.840668 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.840777 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-cache\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.840987 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-lock\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.864484 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs9xx\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-kube-api-access-qs9xx\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.864642 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.909832 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-w5wrv"] Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.910876 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.914798 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.915057 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.924803 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.944635 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-w5wrv"] Jan 21 15:51:48 crc kubenswrapper[4890]: E0121 15:51:48.964623 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-rbnbb ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-w5wrv" podUID="ef1ac397-0b3f-44f3-94e6-8809e031b04d" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.967500 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zk2ll"] Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.968772 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.976405 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zk2ll"] Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.990252 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-w5wrv"] Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.995648 4890 generic.go:334] "Generic (PLEG): container finished" podID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" containerID="2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd" exitCode=0 Jan 21 15:51:48 crc kubenswrapper[4890]: I0121 15:51:48.995703 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cc7a8eb5-11e0-4603-b80a-3b4f6e724770","Type":"ContainerDied","Data":"2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd"} Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.004552 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.005389 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" event={"ID":"d33597fc-f17b-4c75-ad8d-2519551825f1","Type":"ContainerStarted","Data":"20b5942db7803fdbbd887b8716094b589b3f59c8e42f3fcb91b4231e626bab7b"} Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.018090 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.042889 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-scripts\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.042981 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbnbb\" (UniqueName: \"kubernetes.io/projected/ef1ac397-0b3f-44f3-94e6-8809e031b04d-kube-api-access-rbnbb\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.043003 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ef1ac397-0b3f-44f3-94e6-8809e031b04d-etc-swift\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.043118 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-swiftconf\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.043184 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-dispersionconf\") pod \"swift-ring-rebalance-w5wrv\" (UID: 
\"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.043228 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-combined-ca-bundle\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.043251 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-ring-data-devices\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.144620 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-dispersionconf\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.144698 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-combined-ca-bundle\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.144731 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-ring-data-devices\") pod \"swift-ring-rebalance-w5wrv\" (UID: 
\"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.144806 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-scripts\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.144841 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-scripts\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.144868 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-dispersionconf\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.144937 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-ring-data-devices\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.145218 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-combined-ca-bundle\") pod \"swift-ring-rebalance-zk2ll\" (UID: 
\"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.145305 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbnbb\" (UniqueName: \"kubernetes.io/projected/ef1ac397-0b3f-44f3-94e6-8809e031b04d-kube-api-access-rbnbb\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.145384 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ef1ac397-0b3f-44f3-94e6-8809e031b04d-etc-swift\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.145465 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-swiftconf\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.145493 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a5da046-eade-47f6-91bc-2f25e44a4c85-etc-swift\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.145579 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-scripts\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " 
pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.145587 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbch\" (UniqueName: \"kubernetes.io/projected/7a5da046-eade-47f6-91bc-2f25e44a4c85-kube-api-access-qfbch\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.145643 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-swiftconf\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.145675 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ef1ac397-0b3f-44f3-94e6-8809e031b04d-etc-swift\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.146064 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-ring-data-devices\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.148548 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-swiftconf\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 
15:51:49.152759 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-dispersionconf\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.153629 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-combined-ca-bundle\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.168438 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbnbb\" (UniqueName: \"kubernetes.io/projected/ef1ac397-0b3f-44f3-94e6-8809e031b04d-kube-api-access-rbnbb\") pod \"swift-ring-rebalance-w5wrv\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.247650 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-swiftconf\") pod \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.247767 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-dispersionconf\") pod \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.247865 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-ring-data-devices\") pod \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.247959 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-combined-ca-bundle\") pod \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.247982 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ef1ac397-0b3f-44f3-94e6-8809e031b04d-etc-swift\") pod \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248021 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-scripts\") pod \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248323 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ef1ac397-0b3f-44f3-94e6-8809e031b04d" (UID: "ef1ac397-0b3f-44f3-94e6-8809e031b04d"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248376 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-ring-data-devices\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248546 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-combined-ca-bundle\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248554 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef1ac397-0b3f-44f3-94e6-8809e031b04d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ef1ac397-0b3f-44f3-94e6-8809e031b04d" (UID: "ef1ac397-0b3f-44f3-94e6-8809e031b04d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248696 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-swiftconf\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248707 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-scripts" (OuterVolumeSpecName: "scripts") pod "ef1ac397-0b3f-44f3-94e6-8809e031b04d" (UID: "ef1ac397-0b3f-44f3-94e6-8809e031b04d"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248723 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a5da046-eade-47f6-91bc-2f25e44a4c85-etc-swift\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248797 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbch\" (UniqueName: \"kubernetes.io/projected/7a5da046-eade-47f6-91bc-2f25e44a4c85-kube-api-access-qfbch\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.248991 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-scripts\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.249019 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-dispersionconf\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.249072 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a5da046-eade-47f6-91bc-2f25e44a4c85-etc-swift\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 
crc kubenswrapper[4890]: I0121 15:51:49.249080 4890 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.249097 4890 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ef1ac397-0b3f-44f3-94e6-8809e031b04d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.249108 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ac397-0b3f-44f3-94e6-8809e031b04d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.249154 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-ring-data-devices\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.249661 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-scripts\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.250921 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ef1ac397-0b3f-44f3-94e6-8809e031b04d" (UID: "ef1ac397-0b3f-44f3-94e6-8809e031b04d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.251985 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-swiftconf\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.252914 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-dispersionconf\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.252969 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ef1ac397-0b3f-44f3-94e6-8809e031b04d" (UID: "ef1ac397-0b3f-44f3-94e6-8809e031b04d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.253137 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-combined-ca-bundle\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.266234 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef1ac397-0b3f-44f3-94e6-8809e031b04d" (UID: "ef1ac397-0b3f-44f3-94e6-8809e031b04d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.270897 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbch\" (UniqueName: \"kubernetes.io/projected/7a5da046-eade-47f6-91bc-2f25e44a4c85-kube-api-access-qfbch\") pod \"swift-ring-rebalance-zk2ll\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.291098 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.349687 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbnbb\" (UniqueName: \"kubernetes.io/projected/ef1ac397-0b3f-44f3-94e6-8809e031b04d-kube-api-access-rbnbb\") pod \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\" (UID: \"ef1ac397-0b3f-44f3-94e6-8809e031b04d\") " Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.350012 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.350172 4890 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.350191 4890 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.350203 4890 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1ac397-0b3f-44f3-94e6-8809e031b04d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:49 crc kubenswrapper[4890]: E0121 15:51:49.350318 4890 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 15:51:49 crc kubenswrapper[4890]: E0121 15:51:49.350333 4890 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 15:51:49 crc kubenswrapper[4890]: E0121 15:51:49.350407 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift podName:e7d46fba-02db-42e1-a916-1b2528bbdd52 nodeName:}" failed. No retries permitted until 2026-01-21 15:51:50.350388787 +0000 UTC m=+1192.711831196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift") pod "swift-storage-0" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52") : configmap "swift-ring-files" not found Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.352992 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1ac397-0b3f-44f3-94e6-8809e031b04d-kube-api-access-rbnbb" (OuterVolumeSpecName: "kube-api-access-rbnbb") pod "ef1ac397-0b3f-44f3-94e6-8809e031b04d" (UID: "ef1ac397-0b3f-44f3-94e6-8809e031b04d"). InnerVolumeSpecName "kube-api-access-rbnbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.452080 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbnbb\" (UniqueName: \"kubernetes.io/projected/ef1ac397-0b3f-44f3-94e6-8809e031b04d-kube-api-access-rbnbb\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:49 crc kubenswrapper[4890]: I0121 15:51:49.879334 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zk2ll"] Jan 21 15:51:49 crc kubenswrapper[4890]: W0121 15:51:49.887667 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a5da046_eade_47f6_91bc_2f25e44a4c85.slice/crio-7daaeadc52af9b3a929dc2619df8d899f09e0ff55c69edbc5483882867b6426e WatchSource:0}: Error finding container 7daaeadc52af9b3a929dc2619df8d899f09e0ff55c69edbc5483882867b6426e: Status 404 returned error can't find the container with id 7daaeadc52af9b3a929dc2619df8d899f09e0ff55c69edbc5483882867b6426e Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.025318 4890 generic.go:334] "Generic (PLEG): container finished" podID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerID="59497b223247831b279cdfa5ba3775de27963179a11e66f62b77dfe3cf22bb5a" exitCode=0 Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.025698 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" event={"ID":"d33597fc-f17b-4c75-ad8d-2519551825f1","Type":"ContainerDied","Data":"59497b223247831b279cdfa5ba3775de27963179a11e66f62b77dfe3cf22bb5a"} Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.029343 4890 generic.go:334] "Generic (PLEG): container finished" podID="5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" containerID="47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a" exitCode=0 Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.029426 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" event={"ID":"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd","Type":"ContainerDied","Data":"47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a"} Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.032439 4890 generic.go:334] "Generic (PLEG): container finished" podID="501304ef-40ab-490a-8df7-77f1804b4f80" containerID="de5c8eb94b0592cd2381130fcbef7640ad6cc522337aae84ec0e7be3bd126e5e" exitCode=0 Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.032492 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" event={"ID":"501304ef-40ab-490a-8df7-77f1804b4f80","Type":"ContainerDied","Data":"de5c8eb94b0592cd2381130fcbef7640ad6cc522337aae84ec0e7be3bd126e5e"} Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.039082 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cc7a8eb5-11e0-4603-b80a-3b4f6e724770","Type":"ContainerStarted","Data":"eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341"} Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.039836 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-w5wrv" Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.040268 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zk2ll" event={"ID":"7a5da046-eade-47f6-91bc-2f25e44a4c85","Type":"ContainerStarted","Data":"7daaeadc52af9b3a929dc2619df8d899f09e0ff55c69edbc5483882867b6426e"} Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.123863 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-w5wrv"] Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.145515 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-w5wrv"] Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.148724 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371988.706076 podStartE2EDuration="48.148700564s" podCreationTimestamp="2026-01-21 15:51:02 +0000 UTC" firstStartedPulling="2026-01-21 15:51:04.19078081 +0000 UTC m=+1146.552223219" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:50.124940545 +0000 UTC m=+1192.486382964" watchObservedRunningTime="2026-01-21 15:51:50.148700564 +0000 UTC m=+1192.510142983" Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.372220 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:50 crc kubenswrapper[4890]: E0121 15:51:50.372389 4890 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 15:51:50 crc kubenswrapper[4890]: E0121 15:51:50.372410 4890 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 15:51:50 crc kubenswrapper[4890]: E0121 15:51:50.372469 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift podName:e7d46fba-02db-42e1-a916-1b2528bbdd52 nodeName:}" failed. No retries permitted until 2026-01-21 15:51:52.372450212 +0000 UTC m=+1194.733892621 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift") pod "swift-storage-0" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52") : configmap "swift-ring-files" not found Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.815839 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.981839 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-dns-svc\") pod \"501304ef-40ab-490a-8df7-77f1804b4f80\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.981934 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2drg\" (UniqueName: \"kubernetes.io/projected/501304ef-40ab-490a-8df7-77f1804b4f80-kube-api-access-r2drg\") pod \"501304ef-40ab-490a-8df7-77f1804b4f80\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.982075 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-sb\") pod \"501304ef-40ab-490a-8df7-77f1804b4f80\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 
15:51:50.982202 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-config\") pod \"501304ef-40ab-490a-8df7-77f1804b4f80\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.982339 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-nb\") pod \"501304ef-40ab-490a-8df7-77f1804b4f80\" (UID: \"501304ef-40ab-490a-8df7-77f1804b4f80\") " Jan 21 15:51:50 crc kubenswrapper[4890]: I0121 15:51:50.989602 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501304ef-40ab-490a-8df7-77f1804b4f80-kube-api-access-r2drg" (OuterVolumeSpecName: "kube-api-access-r2drg") pod "501304ef-40ab-490a-8df7-77f1804b4f80" (UID: "501304ef-40ab-490a-8df7-77f1804b4f80"). InnerVolumeSpecName "kube-api-access-r2drg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.009238 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-config" (OuterVolumeSpecName: "config") pod "501304ef-40ab-490a-8df7-77f1804b4f80" (UID: "501304ef-40ab-490a-8df7-77f1804b4f80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.009411 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "501304ef-40ab-490a-8df7-77f1804b4f80" (UID: "501304ef-40ab-490a-8df7-77f1804b4f80"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.019784 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "501304ef-40ab-490a-8df7-77f1804b4f80" (UID: "501304ef-40ab-490a-8df7-77f1804b4f80"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.022047 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "501304ef-40ab-490a-8df7-77f1804b4f80" (UID: "501304ef-40ab-490a-8df7-77f1804b4f80"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.048175 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" event={"ID":"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd","Type":"ContainerStarted","Data":"b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8"} Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.051155 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" event={"ID":"501304ef-40ab-490a-8df7-77f1804b4f80","Type":"ContainerDied","Data":"626d346bba74b79f1f53deed5a8f33f2164e40e825d82ac5df6e9e476c84cd8c"} Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.051186 4890 scope.go:117] "RemoveContainer" containerID="de5c8eb94b0592cd2381130fcbef7640ad6cc522337aae84ec0e7be3bd126e5e" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.051287 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-6m4nj" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.064139 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" event={"ID":"d33597fc-f17b-4c75-ad8d-2519551825f1","Type":"ContainerStarted","Data":"54103974d785a15465d49e05a426b57cf3718efe8e315a495157785f5fdd81ce"} Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.085111 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.085142 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.085152 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2drg\" (UniqueName: \"kubernetes.io/projected/501304ef-40ab-490a-8df7-77f1804b4f80-kube-api-access-r2drg\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.085162 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.085171 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501304ef-40ab-490a-8df7-77f1804b4f80-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.143826 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-6m4nj"] Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.154273 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-757dc6fff9-6m4nj"] Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.934814 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="501304ef-40ab-490a-8df7-77f1804b4f80" path="/var/lib/kubelet/pods/501304ef-40ab-490a-8df7-77f1804b4f80/volumes" Jan 21 15:51:51 crc kubenswrapper[4890]: I0121 15:51:51.935450 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1ac397-0b3f-44f3-94e6-8809e031b04d" path="/var/lib/kubelet/pods/ef1ac397-0b3f-44f3-94e6-8809e031b04d/volumes" Jan 21 15:51:52 crc kubenswrapper[4890]: I0121 15:51:52.077576 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:51:52 crc kubenswrapper[4890]: I0121 15:51:52.099069 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" podStartSLOduration=4.186289378 podStartE2EDuration="7.099051867s" podCreationTimestamp="2026-01-21 15:51:45 +0000 UTC" firstStartedPulling="2026-01-21 15:51:46.438926791 +0000 UTC m=+1188.800369200" lastFinishedPulling="2026-01-21 15:51:49.35168928 +0000 UTC m=+1191.713131689" observedRunningTime="2026-01-21 15:51:52.092465744 +0000 UTC m=+1194.453908163" watchObservedRunningTime="2026-01-21 15:51:52.099051867 +0000 UTC m=+1194.460494276" Jan 21 15:51:52 crc kubenswrapper[4890]: I0121 15:51:52.111696 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" podStartSLOduration=3.86679376 podStartE2EDuration="5.11167745s" podCreationTimestamp="2026-01-21 15:51:47 +0000 UTC" firstStartedPulling="2026-01-21 15:51:48.130408615 +0000 UTC m=+1190.491851024" lastFinishedPulling="2026-01-21 15:51:49.375292305 +0000 UTC m=+1191.736734714" observedRunningTime="2026-01-21 15:51:52.110155692 +0000 UTC m=+1194.471598101" watchObservedRunningTime="2026-01-21 15:51:52.11167745 +0000 UTC m=+1194.473119859" Jan 21 15:51:52 crc 
kubenswrapper[4890]: I0121 15:51:52.407479 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:51:52 crc kubenswrapper[4890]: E0121 15:51:52.407755 4890 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 15:51:52 crc kubenswrapper[4890]: E0121 15:51:52.407770 4890 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 15:51:52 crc kubenswrapper[4890]: E0121 15:51:52.407812 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift podName:e7d46fba-02db-42e1-a916-1b2528bbdd52 nodeName:}" failed. No retries permitted until 2026-01-21 15:51:56.407798933 +0000 UTC m=+1198.769241342 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift") pod "swift-storage-0" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52") : configmap "swift-ring-files" not found Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.084024 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"332f4b6c-7fea-4dae-bb46-3c35ee84ba25","Type":"ContainerStarted","Data":"e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0"} Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.507868 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.508182 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.856009 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zv979"] Jan 21 15:51:53 crc kubenswrapper[4890]: E0121 15:51:53.856444 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501304ef-40ab-490a-8df7-77f1804b4f80" containerName="init" Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.856465 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="501304ef-40ab-490a-8df7-77f1804b4f80" containerName="init" Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.856659 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="501304ef-40ab-490a-8df7-77f1804b4f80" containerName="init" Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.857246 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zv979" Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.860103 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.862539 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zv979"] Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.951377 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf7kx\" (UniqueName: \"kubernetes.io/projected/c21ea68a-12a6-4264-ba83-48a08b34fc91-kube-api-access-lf7kx\") pod \"root-account-create-update-zv979\" (UID: \"c21ea68a-12a6-4264-ba83-48a08b34fc91\") " pod="openstack/root-account-create-update-zv979" Jan 21 15:51:53 crc kubenswrapper[4890]: I0121 15:51:53.951779 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21ea68a-12a6-4264-ba83-48a08b34fc91-operator-scripts\") pod \"root-account-create-update-zv979\" (UID: \"c21ea68a-12a6-4264-ba83-48a08b34fc91\") " pod="openstack/root-account-create-update-zv979" Jan 21 15:51:54 crc kubenswrapper[4890]: I0121 15:51:54.054513 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf7kx\" (UniqueName: \"kubernetes.io/projected/c21ea68a-12a6-4264-ba83-48a08b34fc91-kube-api-access-lf7kx\") pod \"root-account-create-update-zv979\" (UID: \"c21ea68a-12a6-4264-ba83-48a08b34fc91\") " pod="openstack/root-account-create-update-zv979" Jan 21 15:51:54 crc kubenswrapper[4890]: I0121 15:51:54.054602 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21ea68a-12a6-4264-ba83-48a08b34fc91-operator-scripts\") pod \"root-account-create-update-zv979\" (UID: 
\"c21ea68a-12a6-4264-ba83-48a08b34fc91\") " pod="openstack/root-account-create-update-zv979" Jan 21 15:51:54 crc kubenswrapper[4890]: I0121 15:51:54.055973 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21ea68a-12a6-4264-ba83-48a08b34fc91-operator-scripts\") pod \"root-account-create-update-zv979\" (UID: \"c21ea68a-12a6-4264-ba83-48a08b34fc91\") " pod="openstack/root-account-create-update-zv979" Jan 21 15:51:54 crc kubenswrapper[4890]: I0121 15:51:54.074117 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf7kx\" (UniqueName: \"kubernetes.io/projected/c21ea68a-12a6-4264-ba83-48a08b34fc91-kube-api-access-lf7kx\") pod \"root-account-create-update-zv979\" (UID: \"c21ea68a-12a6-4264-ba83-48a08b34fc91\") " pod="openstack/root-account-create-update-zv979" Jan 21 15:51:54 crc kubenswrapper[4890]: I0121 15:51:54.213556 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zv979" Jan 21 15:51:54 crc kubenswrapper[4890]: I0121 15:51:54.639645 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zv979"] Jan 21 15:51:54 crc kubenswrapper[4890]: W0121 15:51:54.647730 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc21ea68a_12a6_4264_ba83_48a08b34fc91.slice/crio-44f8426f0504001bd59e74c6c42b66caaefaa754424896336e5678edee660db9 WatchSource:0}: Error finding container 44f8426f0504001bd59e74c6c42b66caaefaa754424896336e5678edee660db9: Status 404 returned error can't find the container with id 44f8426f0504001bd59e74c6c42b66caaefaa754424896336e5678edee660db9 Jan 21 15:51:55 crc kubenswrapper[4890]: I0121 15:51:55.104585 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zv979" event={"ID":"c21ea68a-12a6-4264-ba83-48a08b34fc91","Type":"ContainerStarted","Data":"24cdf8018a6fb626f9f7db1ab7f2df7363bbc909a1e0c7cf765077b74524c60b"} Jan 21 15:51:55 crc kubenswrapper[4890]: I0121 15:51:55.104936 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zv979" event={"ID":"c21ea68a-12a6-4264-ba83-48a08b34fc91","Type":"ContainerStarted","Data":"44f8426f0504001bd59e74c6c42b66caaefaa754424896336e5678edee660db9"} Jan 21 15:51:55 crc kubenswrapper[4890]: I0121 15:51:55.106244 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"332f4b6c-7fea-4dae-bb46-3c35ee84ba25","Type":"ContainerStarted","Data":"abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052"} Jan 21 15:51:55 crc kubenswrapper[4890]: I0121 15:51:55.106391 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 15:51:55 crc kubenswrapper[4890]: I0121 15:51:55.122681 4890 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/root-account-create-update-zv979" podStartSLOduration=2.122659825 podStartE2EDuration="2.122659825s" podCreationTimestamp="2026-01-21 15:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:55.117898346 +0000 UTC m=+1197.479340755" watchObservedRunningTime="2026-01-21 15:51:55.122659825 +0000 UTC m=+1197.484102234" Jan 21 15:51:55 crc kubenswrapper[4890]: I0121 15:51:55.146668 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.409295793 podStartE2EDuration="9.146647979s" podCreationTimestamp="2026-01-21 15:51:46 +0000 UTC" firstStartedPulling="2026-01-21 15:51:47.448612828 +0000 UTC m=+1189.810055237" lastFinishedPulling="2026-01-21 15:51:51.185965014 +0000 UTC m=+1193.547407423" observedRunningTime="2026-01-21 15:51:55.140864206 +0000 UTC m=+1197.502306615" watchObservedRunningTime="2026-01-21 15:51:55.146647979 +0000 UTC m=+1197.508090388" Jan 21 15:51:55 crc kubenswrapper[4890]: I0121 15:51:55.648695 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:55 crc kubenswrapper[4890]: I0121 15:51:55.651254 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" Jan 21 15:51:56 crc kubenswrapper[4890]: I0121 15:51:56.115134 4890 generic.go:334] "Generic (PLEG): container finished" podID="c21ea68a-12a6-4264-ba83-48a08b34fc91" containerID="24cdf8018a6fb626f9f7db1ab7f2df7363bbc909a1e0c7cf765077b74524c60b" exitCode=0 Jan 21 15:51:56 crc kubenswrapper[4890]: I0121 15:51:56.115270 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zv979" event={"ID":"c21ea68a-12a6-4264-ba83-48a08b34fc91","Type":"ContainerDied","Data":"24cdf8018a6fb626f9f7db1ab7f2df7363bbc909a1e0c7cf765077b74524c60b"} 
Jan 21 15:51:56 crc kubenswrapper[4890]: I0121 15:51:56.497660 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0"
Jan 21 15:51:56 crc kubenswrapper[4890]: E0121 15:51:56.497880 4890 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 15:51:56 crc kubenswrapper[4890]: E0121 15:51:56.497914 4890 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 15:51:56 crc kubenswrapper[4890]: E0121 15:51:56.497982 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift podName:e7d46fba-02db-42e1-a916-1b2528bbdd52 nodeName:}" failed. No retries permitted until 2026-01-21 15:52:04.497960789 +0000 UTC m=+1206.859403198 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift") pod "swift-storage-0" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52") : configmap "swift-ring-files" not found
Jan 21 15:51:57 crc kubenswrapper[4890]: I0121 15:51:57.585891 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 21 15:51:57 crc kubenswrapper[4890]: I0121 15:51:57.664010 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 21 15:51:57 crc kubenswrapper[4890]: I0121 15:51:57.898550 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d"
Jan 21 15:51:57 crc kubenswrapper[4890]: I0121 15:51:57.980369 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-c4t69"]
Jan 21 15:51:57 crc kubenswrapper[4890]: I0121 15:51:57.980795 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" podUID="5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" containerName="dnsmasq-dns" containerID="cri-o://b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8" gracePeriod=10
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.548588 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zv979"
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.639731 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf7kx\" (UniqueName: \"kubernetes.io/projected/c21ea68a-12a6-4264-ba83-48a08b34fc91-kube-api-access-lf7kx\") pod \"c21ea68a-12a6-4264-ba83-48a08b34fc91\" (UID: \"c21ea68a-12a6-4264-ba83-48a08b34fc91\") "
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.639790 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21ea68a-12a6-4264-ba83-48a08b34fc91-operator-scripts\") pod \"c21ea68a-12a6-4264-ba83-48a08b34fc91\" (UID: \"c21ea68a-12a6-4264-ba83-48a08b34fc91\") "
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.641187 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21ea68a-12a6-4264-ba83-48a08b34fc91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c21ea68a-12a6-4264-ba83-48a08b34fc91" (UID: "c21ea68a-12a6-4264-ba83-48a08b34fc91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.644509 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21ea68a-12a6-4264-ba83-48a08b34fc91-kube-api-access-lf7kx" (OuterVolumeSpecName: "kube-api-access-lf7kx") pod "c21ea68a-12a6-4264-ba83-48a08b34fc91" (UID: "c21ea68a-12a6-4264-ba83-48a08b34fc91"). InnerVolumeSpecName "kube-api-access-lf7kx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.650116 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69"
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.742055 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp5lw\" (UniqueName: \"kubernetes.io/projected/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-kube-api-access-bp5lw\") pod \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") "
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.742169 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-ovsdbserver-nb\") pod \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") "
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.742232 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-config\") pod \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") "
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.742489 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-dns-svc\") pod \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\" (UID: \"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd\") "
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.743049 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf7kx\" (UniqueName: \"kubernetes.io/projected/c21ea68a-12a6-4264-ba83-48a08b34fc91-kube-api-access-lf7kx\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.743077 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21ea68a-12a6-4264-ba83-48a08b34fc91-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.756165 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-kube-api-access-bp5lw" (OuterVolumeSpecName: "kube-api-access-bp5lw") pod "5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" (UID: "5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd"). InnerVolumeSpecName "kube-api-access-bp5lw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.796493 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" (UID: "5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.814557 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-config" (OuterVolumeSpecName: "config") pod "5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" (UID: "5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.817394 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" (UID: "5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.844488 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.844518 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp5lw\" (UniqueName: \"kubernetes.io/projected/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-kube-api-access-bp5lw\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.844530 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:58 crc kubenswrapper[4890]: I0121 15:51:58.844543 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.151668 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zv979" event={"ID":"c21ea68a-12a6-4264-ba83-48a08b34fc91","Type":"ContainerDied","Data":"44f8426f0504001bd59e74c6c42b66caaefaa754424896336e5678edee660db9"}
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.151720 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f8426f0504001bd59e74c6c42b66caaefaa754424896336e5678edee660db9"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.151727 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zv979"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.153297 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zk2ll" event={"ID":"7a5da046-eade-47f6-91bc-2f25e44a4c85","Type":"ContainerStarted","Data":"a2062123071fef8060317420f2552a337427a5f0cbff7ebea4e49477ef056fa3"}
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.155840 4890 generic.go:334] "Generic (PLEG): container finished" podID="5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" containerID="b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8" exitCode=0
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.155870 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" event={"ID":"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd","Type":"ContainerDied","Data":"b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8"}
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.155886 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69" event={"ID":"5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd","Type":"ContainerDied","Data":"f8e2707c282e2b1319af3af7a881ef48d4f5757b639fded5941e7d005c2b9671"}
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.155888 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-c4t69"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.155903 4890 scope.go:117] "RemoveContainer" containerID="b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.180836 4890 scope.go:117] "RemoveContainer" containerID="47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.181940 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zk2ll" podStartSLOduration=2.700697723 podStartE2EDuration="11.181834372s" podCreationTimestamp="2026-01-21 15:51:48 +0000 UTC" firstStartedPulling="2026-01-21 15:51:49.889795944 +0000 UTC m=+1192.251238353" lastFinishedPulling="2026-01-21 15:51:58.370932573 +0000 UTC m=+1200.732375002" observedRunningTime="2026-01-21 15:51:59.173483225 +0000 UTC m=+1201.534925634" watchObservedRunningTime="2026-01-21 15:51:59.181834372 +0000 UTC m=+1201.543276781"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.198863 4890 scope.go:117] "RemoveContainer" containerID="b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.201004 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-c4t69"]
Jan 21 15:51:59 crc kubenswrapper[4890]: E0121 15:51:59.201502 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8\": container with ID starting with b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8 not found: ID does not exist" containerID="b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.201539 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8"} err="failed to get container status \"b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8\": rpc error: code = NotFound desc = could not find container \"b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8\": container with ID starting with b30160f510109be8b5249014ea56348a09859fc3e4141464e62c5778f123e0a8 not found: ID does not exist"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.201564 4890 scope.go:117] "RemoveContainer" containerID="47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a"
Jan 21 15:51:59 crc kubenswrapper[4890]: E0121 15:51:59.201872 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a\": container with ID starting with 47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a not found: ID does not exist" containerID="47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.201902 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a"} err="failed to get container status \"47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a\": rpc error: code = NotFound desc = could not find container \"47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a\": container with ID starting with 47a506ef5f897f4447a5be990053552322bdb6213f9268df4dd211e72be40e2a not found: ID does not exist"
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.209580 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-c4t69"]
Jan 21 15:51:59 crc kubenswrapper[4890]: I0121 15:51:59.931422 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" path="/var/lib/kubelet/pods/5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd/volumes"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.516604 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-nknm8"]
Jan 21 15:52:00 crc kubenswrapper[4890]: E0121 15:52:00.516958 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" containerName="init"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.516970 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" containerName="init"
Jan 21 15:52:00 crc kubenswrapper[4890]: E0121 15:52:00.517002 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" containerName="dnsmasq-dns"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.517010 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" containerName="dnsmasq-dns"
Jan 21 15:52:00 crc kubenswrapper[4890]: E0121 15:52:00.517046 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21ea68a-12a6-4264-ba83-48a08b34fc91" containerName="mariadb-account-create-update"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.517055 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21ea68a-12a6-4264-ba83-48a08b34fc91" containerName="mariadb-account-create-update"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.517242 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21ea68a-12a6-4264-ba83-48a08b34fc91" containerName="mariadb-account-create-update"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.517267 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a8e5a07-276f-4967-bd9e-dcf0d2cc70bd" containerName="dnsmasq-dns"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.517959 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nknm8"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.529593 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nknm8"]
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.616277 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e6c5-account-create-update-h5s54"]
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.617299 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.621277 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.626526 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6c5-account-create-update-h5s54"]
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.674628 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295fdf34-4879-4ba5-993a-424850ac8e46-operator-scripts\") pod \"glance-db-create-nknm8\" (UID: \"295fdf34-4879-4ba5-993a-424850ac8e46\") " pod="openstack/glance-db-create-nknm8"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.674706 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8skk\" (UniqueName: \"kubernetes.io/projected/295fdf34-4879-4ba5-993a-424850ac8e46-kube-api-access-h8skk\") pod \"glance-db-create-nknm8\" (UID: \"295fdf34-4879-4ba5-993a-424850ac8e46\") " pod="openstack/glance-db-create-nknm8"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.776976 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295fdf34-4879-4ba5-993a-424850ac8e46-operator-scripts\") pod \"glance-db-create-nknm8\" (UID: \"295fdf34-4879-4ba5-993a-424850ac8e46\") " pod="openstack/glance-db-create-nknm8"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.777064 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c48566-f878-4861-ac44-4e2ea1c107a4-operator-scripts\") pod \"glance-e6c5-account-create-update-h5s54\" (UID: \"b8c48566-f878-4861-ac44-4e2ea1c107a4\") " pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.777102 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnzq\" (UniqueName: \"kubernetes.io/projected/b8c48566-f878-4861-ac44-4e2ea1c107a4-kube-api-access-8lnzq\") pod \"glance-e6c5-account-create-update-h5s54\" (UID: \"b8c48566-f878-4861-ac44-4e2ea1c107a4\") " pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.777136 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8skk\" (UniqueName: \"kubernetes.io/projected/295fdf34-4879-4ba5-993a-424850ac8e46-kube-api-access-h8skk\") pod \"glance-db-create-nknm8\" (UID: \"295fdf34-4879-4ba5-993a-424850ac8e46\") " pod="openstack/glance-db-create-nknm8"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.777854 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295fdf34-4879-4ba5-993a-424850ac8e46-operator-scripts\") pod \"glance-db-create-nknm8\" (UID: \"295fdf34-4879-4ba5-993a-424850ac8e46\") " pod="openstack/glance-db-create-nknm8"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.801671 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8skk\" (UniqueName: \"kubernetes.io/projected/295fdf34-4879-4ba5-993a-424850ac8e46-kube-api-access-h8skk\") pod \"glance-db-create-nknm8\" (UID: \"295fdf34-4879-4ba5-993a-424850ac8e46\") " pod="openstack/glance-db-create-nknm8"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.833956 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nknm8"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.879298 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c48566-f878-4861-ac44-4e2ea1c107a4-operator-scripts\") pod \"glance-e6c5-account-create-update-h5s54\" (UID: \"b8c48566-f878-4861-ac44-4e2ea1c107a4\") " pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.879380 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lnzq\" (UniqueName: \"kubernetes.io/projected/b8c48566-f878-4861-ac44-4e2ea1c107a4-kube-api-access-8lnzq\") pod \"glance-e6c5-account-create-update-h5s54\" (UID: \"b8c48566-f878-4861-ac44-4e2ea1c107a4\") " pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.880467 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c48566-f878-4861-ac44-4e2ea1c107a4-operator-scripts\") pod \"glance-e6c5-account-create-update-h5s54\" (UID: \"b8c48566-f878-4861-ac44-4e2ea1c107a4\") " pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.907195 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lnzq\" (UniqueName: \"kubernetes.io/projected/b8c48566-f878-4861-ac44-4e2ea1c107a4-kube-api-access-8lnzq\") pod \"glance-e6c5-account-create-update-h5s54\" (UID: \"b8c48566-f878-4861-ac44-4e2ea1c107a4\") " pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:00 crc kubenswrapper[4890]: I0121 15:52:00.934066 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:01 crc kubenswrapper[4890]: I0121 15:52:01.129692 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nknm8"]
Jan 21 15:52:01 crc kubenswrapper[4890]: W0121 15:52:01.144584 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod295fdf34_4879_4ba5_993a_424850ac8e46.slice/crio-2779deafe3ceb70d8ce7d0c3df3f2bb8f9ac58f9822216917904c574b1bcee0b WatchSource:0}: Error finding container 2779deafe3ceb70d8ce7d0c3df3f2bb8f9ac58f9822216917904c574b1bcee0b: Status 404 returned error can't find the container with id 2779deafe3ceb70d8ce7d0c3df3f2bb8f9ac58f9822216917904c574b1bcee0b
Jan 21 15:52:01 crc kubenswrapper[4890]: I0121 15:52:01.181497 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nknm8" event={"ID":"295fdf34-4879-4ba5-993a-424850ac8e46","Type":"ContainerStarted","Data":"2779deafe3ceb70d8ce7d0c3df3f2bb8f9ac58f9822216917904c574b1bcee0b"}
Jan 21 15:52:01 crc kubenswrapper[4890]: I0121 15:52:01.461875 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6c5-account-create-update-h5s54"]
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.092563 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zv979"]
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.101008 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zv979"]
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.191763 4890 generic.go:334] "Generic (PLEG): container finished" podID="295fdf34-4879-4ba5-993a-424850ac8e46" containerID="3a4def230c0d590d35ba17ed3b1707460c9afd259275943b38f2410a56c09911" exitCode=0
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.191830 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nknm8" event={"ID":"295fdf34-4879-4ba5-993a-424850ac8e46","Type":"ContainerDied","Data":"3a4def230c0d590d35ba17ed3b1707460c9afd259275943b38f2410a56c09911"}
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.193696 4890 generic.go:334] "Generic (PLEG): container finished" podID="b8c48566-f878-4861-ac44-4e2ea1c107a4" containerID="f20373fd78902e12450bffce9ba15c92cdf73c9b0b39d7a93161e3fb5d6f7984" exitCode=0
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.193723 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6c5-account-create-update-h5s54" event={"ID":"b8c48566-f878-4861-ac44-4e2ea1c107a4","Type":"ContainerDied","Data":"f20373fd78902e12450bffce9ba15c92cdf73c9b0b39d7a93161e3fb5d6f7984"}
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.193738 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6c5-account-create-update-h5s54" event={"ID":"b8c48566-f878-4861-ac44-4e2ea1c107a4","Type":"ContainerStarted","Data":"32e2818760144f9bf68d93fbf2828a76e2c12b3323b60fd7cb331122fef60ddd"}
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.200410 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j6fxt"]
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.201948 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j6fxt"
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.204022 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.215089 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j6fxt"]
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.309065 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad5e04e-3d58-4106-aef9-e589cfc46280-operator-scripts\") pod \"root-account-create-update-j6fxt\" (UID: \"bad5e04e-3d58-4106-aef9-e589cfc46280\") " pod="openstack/root-account-create-update-j6fxt"
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.309227 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccb8n\" (UniqueName: \"kubernetes.io/projected/bad5e04e-3d58-4106-aef9-e589cfc46280-kube-api-access-ccb8n\") pod \"root-account-create-update-j6fxt\" (UID: \"bad5e04e-3d58-4106-aef9-e589cfc46280\") " pod="openstack/root-account-create-update-j6fxt"
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.411531 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccb8n\" (UniqueName: \"kubernetes.io/projected/bad5e04e-3d58-4106-aef9-e589cfc46280-kube-api-access-ccb8n\") pod \"root-account-create-update-j6fxt\" (UID: \"bad5e04e-3d58-4106-aef9-e589cfc46280\") " pod="openstack/root-account-create-update-j6fxt"
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.411722 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad5e04e-3d58-4106-aef9-e589cfc46280-operator-scripts\") pod \"root-account-create-update-j6fxt\" (UID: \"bad5e04e-3d58-4106-aef9-e589cfc46280\") " pod="openstack/root-account-create-update-j6fxt"
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.413310 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad5e04e-3d58-4106-aef9-e589cfc46280-operator-scripts\") pod \"root-account-create-update-j6fxt\" (UID: \"bad5e04e-3d58-4106-aef9-e589cfc46280\") " pod="openstack/root-account-create-update-j6fxt"
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.438812 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccb8n\" (UniqueName: \"kubernetes.io/projected/bad5e04e-3d58-4106-aef9-e589cfc46280-kube-api-access-ccb8n\") pod \"root-account-create-update-j6fxt\" (UID: \"bad5e04e-3d58-4106-aef9-e589cfc46280\") " pod="openstack/root-account-create-update-j6fxt"
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.517422 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j6fxt"
Jan 21 15:52:02 crc kubenswrapper[4890]: I0121 15:52:02.968287 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j6fxt"]
Jan 21 15:52:02 crc kubenswrapper[4890]: W0121 15:52:02.981918 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad5e04e_3d58_4106_aef9_e589cfc46280.slice/crio-53bf3b919c1b1be8647e1310ce11e6b10a3098784939d33db81abab689417755 WatchSource:0}: Error finding container 53bf3b919c1b1be8647e1310ce11e6b10a3098784939d33db81abab689417755: Status 404 returned error can't find the container with id 53bf3b919c1b1be8647e1310ce11e6b10a3098784939d33db81abab689417755
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.201822 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j6fxt" event={"ID":"bad5e04e-3d58-4106-aef9-e589cfc46280","Type":"ContainerStarted","Data":"81ce73a5febd9e0238771e586ff5419d5d560723626eb34cc9b2725d3302a763"}
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.201873 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j6fxt" event={"ID":"bad5e04e-3d58-4106-aef9-e589cfc46280","Type":"ContainerStarted","Data":"53bf3b919c1b1be8647e1310ce11e6b10a3098784939d33db81abab689417755"}
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.224862 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-j6fxt" podStartSLOduration=1.224841278 podStartE2EDuration="1.224841278s" podCreationTimestamp="2026-01-21 15:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:03.220139431 +0000 UTC m=+1205.581581850" watchObservedRunningTime="2026-01-21 15:52:03.224841278 +0000 UTC m=+1205.586283687"
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.655936 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.662503 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nknm8"
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.734308 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c48566-f878-4861-ac44-4e2ea1c107a4-operator-scripts\") pod \"b8c48566-f878-4861-ac44-4e2ea1c107a4\" (UID: \"b8c48566-f878-4861-ac44-4e2ea1c107a4\") "
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.734456 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8skk\" (UniqueName: \"kubernetes.io/projected/295fdf34-4879-4ba5-993a-424850ac8e46-kube-api-access-h8skk\") pod \"295fdf34-4879-4ba5-993a-424850ac8e46\" (UID: \"295fdf34-4879-4ba5-993a-424850ac8e46\") "
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.734613 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295fdf34-4879-4ba5-993a-424850ac8e46-operator-scripts\") pod \"295fdf34-4879-4ba5-993a-424850ac8e46\" (UID: \"295fdf34-4879-4ba5-993a-424850ac8e46\") "
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.734722 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lnzq\" (UniqueName: \"kubernetes.io/projected/b8c48566-f878-4861-ac44-4e2ea1c107a4-kube-api-access-8lnzq\") pod \"b8c48566-f878-4861-ac44-4e2ea1c107a4\" (UID: \"b8c48566-f878-4861-ac44-4e2ea1c107a4\") "
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.740378 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/295fdf34-4879-4ba5-993a-424850ac8e46-kube-api-access-h8skk" (OuterVolumeSpecName: "kube-api-access-h8skk") pod "295fdf34-4879-4ba5-993a-424850ac8e46" (UID: "295fdf34-4879-4ba5-993a-424850ac8e46"). InnerVolumeSpecName "kube-api-access-h8skk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.741026 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c48566-f878-4861-ac44-4e2ea1c107a4-kube-api-access-8lnzq" (OuterVolumeSpecName: "kube-api-access-8lnzq") pod "b8c48566-f878-4861-ac44-4e2ea1c107a4" (UID: "b8c48566-f878-4861-ac44-4e2ea1c107a4"). InnerVolumeSpecName "kube-api-access-8lnzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.836945 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lnzq\" (UniqueName: \"kubernetes.io/projected/b8c48566-f878-4861-ac44-4e2ea1c107a4-kube-api-access-8lnzq\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.836990 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8skk\" (UniqueName: \"kubernetes.io/projected/295fdf34-4879-4ba5-993a-424850ac8e46-kube-api-access-h8skk\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:03 crc kubenswrapper[4890]: I0121 15:52:03.926489 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21ea68a-12a6-4264-ba83-48a08b34fc91" path="/var/lib/kubelet/pods/c21ea68a-12a6-4264-ba83-48a08b34fc91/volumes"
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.009983 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c48566-f878-4861-ac44-4e2ea1c107a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8c48566-f878-4861-ac44-4e2ea1c107a4" (UID: "b8c48566-f878-4861-ac44-4e2ea1c107a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.010054 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/295fdf34-4879-4ba5-993a-424850ac8e46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "295fdf34-4879-4ba5-993a-424850ac8e46" (UID: "295fdf34-4879-4ba5-993a-424850ac8e46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.041302 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c48566-f878-4861-ac44-4e2ea1c107a4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.041465 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/295fdf34-4879-4ba5-993a-424850ac8e46-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.209977 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6c5-account-create-update-h5s54"
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.209973 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6c5-account-create-update-h5s54" event={"ID":"b8c48566-f878-4861-ac44-4e2ea1c107a4","Type":"ContainerDied","Data":"32e2818760144f9bf68d93fbf2828a76e2c12b3323b60fd7cb331122fef60ddd"}
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.210099 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e2818760144f9bf68d93fbf2828a76e2c12b3323b60fd7cb331122fef60ddd"
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.214066 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nknm8" event={"ID":"295fdf34-4879-4ba5-993a-424850ac8e46","Type":"ContainerDied","Data":"2779deafe3ceb70d8ce7d0c3df3f2bb8f9ac58f9822216917904c574b1bcee0b"}
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.214100 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2779deafe3ceb70d8ce7d0c3df3f2bb8f9ac58f9822216917904c574b1bcee0b"
Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.214136 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nknm8" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.551877 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:52:04 crc kubenswrapper[4890]: E0121 15:52:04.552088 4890 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 15:52:04 crc kubenswrapper[4890]: E0121 15:52:04.552426 4890 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 15:52:04 crc kubenswrapper[4890]: E0121 15:52:04.552493 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift podName:e7d46fba-02db-42e1-a916-1b2528bbdd52 nodeName:}" failed. No retries permitted until 2026-01-21 15:52:20.55247013 +0000 UTC m=+1222.913912549 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift") pod "swift-storage-0" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52") : configmap "swift-ring-files" not found Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.881168 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-v5vck"] Jan 21 15:52:04 crc kubenswrapper[4890]: E0121 15:52:04.881624 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c48566-f878-4861-ac44-4e2ea1c107a4" containerName="mariadb-account-create-update" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.881646 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c48566-f878-4861-ac44-4e2ea1c107a4" containerName="mariadb-account-create-update" Jan 21 15:52:04 crc kubenswrapper[4890]: E0121 15:52:04.881660 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295fdf34-4879-4ba5-993a-424850ac8e46" containerName="mariadb-database-create" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.881670 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="295fdf34-4879-4ba5-993a-424850ac8e46" containerName="mariadb-database-create" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.881942 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c48566-f878-4861-ac44-4e2ea1c107a4" containerName="mariadb-account-create-update" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.881974 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="295fdf34-4879-4ba5-993a-424850ac8e46" containerName="mariadb-database-create" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.882748 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.894486 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v5vck"] Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.957987 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27bwl\" (UniqueName: \"kubernetes.io/projected/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-kube-api-access-27bwl\") pod \"keystone-db-create-v5vck\" (UID: \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\") " pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.958151 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-operator-scripts\") pod \"keystone-db-create-v5vck\" (UID: \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\") " pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.987641 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-358a-account-create-update-6m47f"] Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.988688 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.991301 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 15:52:04 crc kubenswrapper[4890]: I0121 15:52:04.998887 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-358a-account-create-update-6m47f"] Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.059464 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac05403-ccde-41a4-9312-7c536af1825d-operator-scripts\") pod \"keystone-358a-account-create-update-6m47f\" (UID: \"3ac05403-ccde-41a4-9312-7c536af1825d\") " pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.059582 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27bwl\" (UniqueName: \"kubernetes.io/projected/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-kube-api-access-27bwl\") pod \"keystone-db-create-v5vck\" (UID: \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\") " pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.059791 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-operator-scripts\") pod \"keystone-db-create-v5vck\" (UID: \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\") " pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.059836 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqr5\" (UniqueName: \"kubernetes.io/projected/3ac05403-ccde-41a4-9312-7c536af1825d-kube-api-access-vhqr5\") pod \"keystone-358a-account-create-update-6m47f\" (UID: 
\"3ac05403-ccde-41a4-9312-7c536af1825d\") " pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.061782 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-operator-scripts\") pod \"keystone-db-create-v5vck\" (UID: \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\") " pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.084977 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27bwl\" (UniqueName: \"kubernetes.io/projected/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-kube-api-access-27bwl\") pod \"keystone-db-create-v5vck\" (UID: \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\") " pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.161628 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqr5\" (UniqueName: \"kubernetes.io/projected/3ac05403-ccde-41a4-9312-7c536af1825d-kube-api-access-vhqr5\") pod \"keystone-358a-account-create-update-6m47f\" (UID: \"3ac05403-ccde-41a4-9312-7c536af1825d\") " pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.161696 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac05403-ccde-41a4-9312-7c536af1825d-operator-scripts\") pod \"keystone-358a-account-create-update-6m47f\" (UID: \"3ac05403-ccde-41a4-9312-7c536af1825d\") " pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.162736 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac05403-ccde-41a4-9312-7c536af1825d-operator-scripts\") pod 
\"keystone-358a-account-create-update-6m47f\" (UID: \"3ac05403-ccde-41a4-9312-7c536af1825d\") " pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.177717 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqr5\" (UniqueName: \"kubernetes.io/projected/3ac05403-ccde-41a4-9312-7c536af1825d-kube-api-access-vhqr5\") pod \"keystone-358a-account-create-update-6m47f\" (UID: \"3ac05403-ccde-41a4-9312-7c536af1825d\") " pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.201947 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.232602 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-s9xcj"] Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.236596 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.251925 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s9xcj"] Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.308514 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.357517 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1136-account-create-update-w54vk"] Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.359175 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.361267 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.364863 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7g99\" (UniqueName: \"kubernetes.io/projected/637e1cbc-a769-4deb-926c-ec36b9b6dc61-kube-api-access-h7g99\") pod \"placement-db-create-s9xcj\" (UID: \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\") " pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.364968 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637e1cbc-a769-4deb-926c-ec36b9b6dc61-operator-scripts\") pod \"placement-db-create-s9xcj\" (UID: \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\") " pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.380407 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1136-account-create-update-w54vk"] Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.466141 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7g99\" (UniqueName: \"kubernetes.io/projected/637e1cbc-a769-4deb-926c-ec36b9b6dc61-kube-api-access-h7g99\") pod \"placement-db-create-s9xcj\" (UID: \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\") " pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.466221 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqt6f\" (UniqueName: \"kubernetes.io/projected/c40c65d3-b792-4fba-b282-4f1943d4f71f-kube-api-access-cqt6f\") pod \"placement-1136-account-create-update-w54vk\" (UID: 
\"c40c65d3-b792-4fba-b282-4f1943d4f71f\") " pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.466284 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40c65d3-b792-4fba-b282-4f1943d4f71f-operator-scripts\") pod \"placement-1136-account-create-update-w54vk\" (UID: \"c40c65d3-b792-4fba-b282-4f1943d4f71f\") " pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.466310 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637e1cbc-a769-4deb-926c-ec36b9b6dc61-operator-scripts\") pod \"placement-db-create-s9xcj\" (UID: \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\") " pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.466958 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637e1cbc-a769-4deb-926c-ec36b9b6dc61-operator-scripts\") pod \"placement-db-create-s9xcj\" (UID: \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\") " pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.491485 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7g99\" (UniqueName: \"kubernetes.io/projected/637e1cbc-a769-4deb-926c-ec36b9b6dc61-kube-api-access-h7g99\") pod \"placement-db-create-s9xcj\" (UID: \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\") " pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.567811 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqt6f\" (UniqueName: \"kubernetes.io/projected/c40c65d3-b792-4fba-b282-4f1943d4f71f-kube-api-access-cqt6f\") pod 
\"placement-1136-account-create-update-w54vk\" (UID: \"c40c65d3-b792-4fba-b282-4f1943d4f71f\") " pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.568265 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40c65d3-b792-4fba-b282-4f1943d4f71f-operator-scripts\") pod \"placement-1136-account-create-update-w54vk\" (UID: \"c40c65d3-b792-4fba-b282-4f1943d4f71f\") " pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.568966 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40c65d3-b792-4fba-b282-4f1943d4f71f-operator-scripts\") pod \"placement-1136-account-create-update-w54vk\" (UID: \"c40c65d3-b792-4fba-b282-4f1943d4f71f\") " pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.585292 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqt6f\" (UniqueName: \"kubernetes.io/projected/c40c65d3-b792-4fba-b282-4f1943d4f71f-kube-api-access-cqt6f\") pod \"placement-1136-account-create-update-w54vk\" (UID: \"c40c65d3-b792-4fba-b282-4f1943d4f71f\") " pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.634662 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.693594 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v5vck"] Jan 21 15:52:05 crc kubenswrapper[4890]: W0121 15:52:05.696619 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fbfebc6_e25e_47f6_97d1_dd9d8c0cd944.slice/crio-762cd40d35d9172ccc7c668a9949cc5914981311e7a08828c91f03ecea0971e8 WatchSource:0}: Error finding container 762cd40d35d9172ccc7c668a9949cc5914981311e7a08828c91f03ecea0971e8: Status 404 returned error can't find the container with id 762cd40d35d9172ccc7c668a9949cc5914981311e7a08828c91f03ecea0971e8 Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.696909 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.871457 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-358a-account-create-update-6m47f"] Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.877176 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-skk7h"] Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.878545 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.881654 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kklp5" Jan 21 15:52:05 crc kubenswrapper[4890]: W0121 15:52:05.881911 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ac05403_ccde_41a4_9312_7c536af1825d.slice/crio-657bba88b07d981a394723d0287379280e72d1928b3a8932a7d8bb8fe04cdbf2 WatchSource:0}: Error finding container 657bba88b07d981a394723d0287379280e72d1928b3a8932a7d8bb8fe04cdbf2: Status 404 returned error can't find the container with id 657bba88b07d981a394723d0287379280e72d1928b3a8932a7d8bb8fe04cdbf2 Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.882327 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.911822 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-skk7h"] Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.983095 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfdvh\" (UniqueName: \"kubernetes.io/projected/2f759e91-6dab-4432-9431-ce312918c7e7-kube-api-access-xfdvh\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.983183 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-combined-ca-bundle\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.983542 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-config-data\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:05 crc kubenswrapper[4890]: I0121 15:52:05.983905 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-db-sync-config-data\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.085317 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfdvh\" (UniqueName: \"kubernetes.io/projected/2f759e91-6dab-4432-9431-ce312918c7e7-kube-api-access-xfdvh\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.085481 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-combined-ca-bundle\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.085567 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-config-data\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.085661 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-db-sync-config-data\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.091535 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-combined-ca-bundle\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.091585 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-config-data\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.091769 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-db-sync-config-data\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.103321 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfdvh\" (UniqueName: \"kubernetes.io/projected/2f759e91-6dab-4432-9431-ce312918c7e7-kube-api-access-xfdvh\") pod \"glance-db-sync-skk7h\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.148486 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s9xcj"] Jan 21 15:52:06 crc kubenswrapper[4890]: W0121 15:52:06.152311 4890 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod637e1cbc_a769_4deb_926c_ec36b9b6dc61.slice/crio-7f76b9419203170725accfed4d6f57a93227f37740c928cf2c8ff7356732d122 WatchSource:0}: Error finding container 7f76b9419203170725accfed4d6f57a93227f37740c928cf2c8ff7356732d122: Status 404 returned error can't find the container with id 7f76b9419203170725accfed4d6f57a93227f37740c928cf2c8ff7356732d122 Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.206808 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.240079 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-358a-account-create-update-6m47f" event={"ID":"3ac05403-ccde-41a4-9312-7c536af1825d","Type":"ContainerStarted","Data":"657bba88b07d981a394723d0287379280e72d1928b3a8932a7d8bb8fe04cdbf2"} Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.241209 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s9xcj" event={"ID":"637e1cbc-a769-4deb-926c-ec36b9b6dc61","Type":"ContainerStarted","Data":"7f76b9419203170725accfed4d6f57a93227f37740c928cf2c8ff7356732d122"} Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.244314 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v5vck" event={"ID":"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944","Type":"ContainerStarted","Data":"762cd40d35d9172ccc7c668a9949cc5914981311e7a08828c91f03ecea0971e8"} Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.246883 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1136-account-create-update-w54vk"] Jan 21 15:52:06 crc kubenswrapper[4890]: W0121 15:52:06.253966 4890 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40c65d3_b792_4fba_b282_4f1943d4f71f.slice/crio-a82edafcca769a447737d3cc91903a6227f66d4170f0602ee0fd6f92171cd12c WatchSource:0}: Error finding container a82edafcca769a447737d3cc91903a6227f66d4170f0602ee0fd6f92171cd12c: Status 404 returned error can't find the container with id a82edafcca769a447737d3cc91903a6227f66d4170f0602ee0fd6f92171cd12c Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.812789 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-skk7h"] Jan 21 15:52:06 crc kubenswrapper[4890]: I0121 15:52:06.979415 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.253878 4890 generic.go:334] "Generic (PLEG): container finished" podID="637e1cbc-a769-4deb-926c-ec36b9b6dc61" containerID="66d6987ba827025ccf444c0f90c8245315d050819facad259928746279545f54" exitCode=0 Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.254104 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s9xcj" event={"ID":"637e1cbc-a769-4deb-926c-ec36b9b6dc61","Type":"ContainerDied","Data":"66d6987ba827025ccf444c0f90c8245315d050819facad259928746279545f54"} Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.256764 4890 generic.go:334] "Generic (PLEG): container finished" podID="0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944" containerID="2e0931f58ca1a5d447ce95af0d130834497a2a72ee0697c47fe09bd70c3de541" exitCode=0 Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.256840 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v5vck" event={"ID":"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944","Type":"ContainerDied","Data":"2e0931f58ca1a5d447ce95af0d130834497a2a72ee0697c47fe09bd70c3de541"} Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.265225 4890 generic.go:334] "Generic (PLEG): container finished" 
podID="3ac05403-ccde-41a4-9312-7c536af1825d" containerID="b35ec4c89779e5eb7c02e3bb914d9b1910b675eef13a014bb8895f0526ae4e60" exitCode=0 Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.265328 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-358a-account-create-update-6m47f" event={"ID":"3ac05403-ccde-41a4-9312-7c536af1825d","Type":"ContainerDied","Data":"b35ec4c89779e5eb7c02e3bb914d9b1910b675eef13a014bb8895f0526ae4e60"} Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.267694 4890 generic.go:334] "Generic (PLEG): container finished" podID="c40c65d3-b792-4fba-b282-4f1943d4f71f" containerID="9c6bd3901ab21eb3a6a896f88c75422f37111da6dc7a9f7afacad059eda2c587" exitCode=0 Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.267777 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1136-account-create-update-w54vk" event={"ID":"c40c65d3-b792-4fba-b282-4f1943d4f71f","Type":"ContainerDied","Data":"9c6bd3901ab21eb3a6a896f88c75422f37111da6dc7a9f7afacad059eda2c587"} Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.267807 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1136-account-create-update-w54vk" event={"ID":"c40c65d3-b792-4fba-b282-4f1943d4f71f","Type":"ContainerStarted","Data":"a82edafcca769a447737d3cc91903a6227f66d4170f0602ee0fd6f92171cd12c"} Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.269895 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-skk7h" event={"ID":"2f759e91-6dab-4432-9431-ce312918c7e7","Type":"ContainerStarted","Data":"633e01db38c452fde46acd1d5ba44b00225546bfdd35ff0b1f39e708b02f0213"} Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.271267 4890 generic.go:334] "Generic (PLEG): container finished" podID="bad5e04e-3d58-4106-aef9-e589cfc46280" containerID="81ce73a5febd9e0238771e586ff5419d5d560723626eb34cc9b2725d3302a763" exitCode=0 Jan 21 15:52:07 crc kubenswrapper[4890]: I0121 15:52:07.271326 
4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j6fxt" event={"ID":"bad5e04e-3d58-4106-aef9-e589cfc46280","Type":"ContainerDied","Data":"81ce73a5febd9e0238771e586ff5419d5d560723626eb34cc9b2725d3302a763"} Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.284484 4890 generic.go:334] "Generic (PLEG): container finished" podID="7a5da046-eade-47f6-91bc-2f25e44a4c85" containerID="a2062123071fef8060317420f2552a337427a5f0cbff7ebea4e49477ef056fa3" exitCode=0 Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.284845 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zk2ll" event={"ID":"7a5da046-eade-47f6-91bc-2f25e44a4c85","Type":"ContainerDied","Data":"a2062123071fef8060317420f2552a337427a5f0cbff7ebea4e49477ef056fa3"} Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.730424 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.844908 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40c65d3-b792-4fba-b282-4f1943d4f71f-operator-scripts\") pod \"c40c65d3-b792-4fba-b282-4f1943d4f71f\" (UID: \"c40c65d3-b792-4fba-b282-4f1943d4f71f\") " Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.845034 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqt6f\" (UniqueName: \"kubernetes.io/projected/c40c65d3-b792-4fba-b282-4f1943d4f71f-kube-api-access-cqt6f\") pod \"c40c65d3-b792-4fba-b282-4f1943d4f71f\" (UID: \"c40c65d3-b792-4fba-b282-4f1943d4f71f\") " Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.845823 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40c65d3-b792-4fba-b282-4f1943d4f71f-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "c40c65d3-b792-4fba-b282-4f1943d4f71f" (UID: "c40c65d3-b792-4fba-b282-4f1943d4f71f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.853683 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40c65d3-b792-4fba-b282-4f1943d4f71f-kube-api-access-cqt6f" (OuterVolumeSpecName: "kube-api-access-cqt6f") pod "c40c65d3-b792-4fba-b282-4f1943d4f71f" (UID: "c40c65d3-b792-4fba-b282-4f1943d4f71f"). InnerVolumeSpecName "kube-api-access-cqt6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.935209 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.948710 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40c65d3-b792-4fba-b282-4f1943d4f71f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.949169 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqt6f\" (UniqueName: \"kubernetes.io/projected/c40c65d3-b792-4fba-b282-4f1943d4f71f-kube-api-access-cqt6f\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.961910 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j6fxt" Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.970289 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:08 crc kubenswrapper[4890]: I0121 15:52:08.986178 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.050341 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27bwl\" (UniqueName: \"kubernetes.io/projected/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-kube-api-access-27bwl\") pod \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\" (UID: \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.050453 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad5e04e-3d58-4106-aef9-e589cfc46280-operator-scripts\") pod \"bad5e04e-3d58-4106-aef9-e589cfc46280\" (UID: \"bad5e04e-3d58-4106-aef9-e589cfc46280\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.050512 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac05403-ccde-41a4-9312-7c536af1825d-operator-scripts\") pod \"3ac05403-ccde-41a4-9312-7c536af1825d\" (UID: \"3ac05403-ccde-41a4-9312-7c536af1825d\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.050579 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-operator-scripts\") pod \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\" (UID: \"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.050613 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccb8n\" (UniqueName: \"kubernetes.io/projected/bad5e04e-3d58-4106-aef9-e589cfc46280-kube-api-access-ccb8n\") pod \"bad5e04e-3d58-4106-aef9-e589cfc46280\" (UID: \"bad5e04e-3d58-4106-aef9-e589cfc46280\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.050632 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vhqr5\" (UniqueName: \"kubernetes.io/projected/3ac05403-ccde-41a4-9312-7c536af1825d-kube-api-access-vhqr5\") pod \"3ac05403-ccde-41a4-9312-7c536af1825d\" (UID: \"3ac05403-ccde-41a4-9312-7c536af1825d\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.052866 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac05403-ccde-41a4-9312-7c536af1825d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ac05403-ccde-41a4-9312-7c536af1825d" (UID: "3ac05403-ccde-41a4-9312-7c536af1825d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.053041 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bad5e04e-3d58-4106-aef9-e589cfc46280-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bad5e04e-3d58-4106-aef9-e589cfc46280" (UID: "bad5e04e-3d58-4106-aef9-e589cfc46280"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.053326 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944" (UID: "0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.056299 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-kube-api-access-27bwl" (OuterVolumeSpecName: "kube-api-access-27bwl") pod "0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944" (UID: "0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944"). 
InnerVolumeSpecName "kube-api-access-27bwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.056400 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad5e04e-3d58-4106-aef9-e589cfc46280-kube-api-access-ccb8n" (OuterVolumeSpecName: "kube-api-access-ccb8n") pod "bad5e04e-3d58-4106-aef9-e589cfc46280" (UID: "bad5e04e-3d58-4106-aef9-e589cfc46280"). InnerVolumeSpecName "kube-api-access-ccb8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.057052 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac05403-ccde-41a4-9312-7c536af1825d-kube-api-access-vhqr5" (OuterVolumeSpecName: "kube-api-access-vhqr5") pod "3ac05403-ccde-41a4-9312-7c536af1825d" (UID: "3ac05403-ccde-41a4-9312-7c536af1825d"). InnerVolumeSpecName "kube-api-access-vhqr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.152099 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637e1cbc-a769-4deb-926c-ec36b9b6dc61-operator-scripts\") pod \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\" (UID: \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.152200 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7g99\" (UniqueName: \"kubernetes.io/projected/637e1cbc-a769-4deb-926c-ec36b9b6dc61-kube-api-access-h7g99\") pod \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\" (UID: \"637e1cbc-a769-4deb-926c-ec36b9b6dc61\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.152568 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/637e1cbc-a769-4deb-926c-ec36b9b6dc61-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "637e1cbc-a769-4deb-926c-ec36b9b6dc61" (UID: "637e1cbc-a769-4deb-926c-ec36b9b6dc61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.153305 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27bwl\" (UniqueName: \"kubernetes.io/projected/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-kube-api-access-27bwl\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.153327 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637e1cbc-a769-4deb-926c-ec36b9b6dc61-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.153336 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad5e04e-3d58-4106-aef9-e589cfc46280-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.153364 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac05403-ccde-41a4-9312-7c536af1825d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.153374 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.153383 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccb8n\" (UniqueName: \"kubernetes.io/projected/bad5e04e-3d58-4106-aef9-e589cfc46280-kube-api-access-ccb8n\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.153391 4890 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vhqr5\" (UniqueName: \"kubernetes.io/projected/3ac05403-ccde-41a4-9312-7c536af1825d-kube-api-access-vhqr5\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.155291 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/637e1cbc-a769-4deb-926c-ec36b9b6dc61-kube-api-access-h7g99" (OuterVolumeSpecName: "kube-api-access-h7g99") pod "637e1cbc-a769-4deb-926c-ec36b9b6dc61" (UID: "637e1cbc-a769-4deb-926c-ec36b9b6dc61"). InnerVolumeSpecName "kube-api-access-h7g99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.255281 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7g99\" (UniqueName: \"kubernetes.io/projected/637e1cbc-a769-4deb-926c-ec36b9b6dc61-kube-api-access-h7g99\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.298906 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-358a-account-create-update-6m47f" event={"ID":"3ac05403-ccde-41a4-9312-7c536af1825d","Type":"ContainerDied","Data":"657bba88b07d981a394723d0287379280e72d1928b3a8932a7d8bb8fe04cdbf2"} Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.298951 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657bba88b07d981a394723d0287379280e72d1928b3a8932a7d8bb8fe04cdbf2" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.298978 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-358a-account-create-update-6m47f" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.301046 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1136-account-create-update-w54vk" event={"ID":"c40c65d3-b792-4fba-b282-4f1943d4f71f","Type":"ContainerDied","Data":"a82edafcca769a447737d3cc91903a6227f66d4170f0602ee0fd6f92171cd12c"} Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.301069 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a82edafcca769a447737d3cc91903a6227f66d4170f0602ee0fd6f92171cd12c" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.301074 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1136-account-create-update-w54vk" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.322434 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j6fxt" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.322533 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j6fxt" event={"ID":"bad5e04e-3d58-4106-aef9-e589cfc46280","Type":"ContainerDied","Data":"53bf3b919c1b1be8647e1310ce11e6b10a3098784939d33db81abab689417755"} Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.322581 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53bf3b919c1b1be8647e1310ce11e6b10a3098784939d33db81abab689417755" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.327686 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s9xcj" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.327739 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s9xcj" event={"ID":"637e1cbc-a769-4deb-926c-ec36b9b6dc61","Type":"ContainerDied","Data":"7f76b9419203170725accfed4d6f57a93227f37740c928cf2c8ff7356732d122"} Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.327775 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f76b9419203170725accfed4d6f57a93227f37740c928cf2c8ff7356732d122" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.329835 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v5vck" event={"ID":"0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944","Type":"ContainerDied","Data":"762cd40d35d9172ccc7c668a9949cc5914981311e7a08828c91f03ecea0971e8"} Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.329885 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v5vck" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.329929 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="762cd40d35d9172ccc7c668a9949cc5914981311e7a08828c91f03ecea0971e8" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.603334 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.762612 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-ring-data-devices\") pod \"7a5da046-eade-47f6-91bc-2f25e44a4c85\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.762678 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-scripts\") pod \"7a5da046-eade-47f6-91bc-2f25e44a4c85\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.762742 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a5da046-eade-47f6-91bc-2f25e44a4c85-etc-swift\") pod \"7a5da046-eade-47f6-91bc-2f25e44a4c85\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.762807 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-swiftconf\") pod \"7a5da046-eade-47f6-91bc-2f25e44a4c85\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.762826 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-dispersionconf\") pod \"7a5da046-eade-47f6-91bc-2f25e44a4c85\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.762861 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-combined-ca-bundle\") pod \"7a5da046-eade-47f6-91bc-2f25e44a4c85\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.762932 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfbch\" (UniqueName: \"kubernetes.io/projected/7a5da046-eade-47f6-91bc-2f25e44a4c85-kube-api-access-qfbch\") pod \"7a5da046-eade-47f6-91bc-2f25e44a4c85\" (UID: \"7a5da046-eade-47f6-91bc-2f25e44a4c85\") " Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.763516 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7a5da046-eade-47f6-91bc-2f25e44a4c85" (UID: "7a5da046-eade-47f6-91bc-2f25e44a4c85"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.764640 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a5da046-eade-47f6-91bc-2f25e44a4c85-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7a5da046-eade-47f6-91bc-2f25e44a4c85" (UID: "7a5da046-eade-47f6-91bc-2f25e44a4c85"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.777862 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5da046-eade-47f6-91bc-2f25e44a4c85-kube-api-access-qfbch" (OuterVolumeSpecName: "kube-api-access-qfbch") pod "7a5da046-eade-47f6-91bc-2f25e44a4c85" (UID: "7a5da046-eade-47f6-91bc-2f25e44a4c85"). InnerVolumeSpecName "kube-api-access-qfbch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.781098 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7a5da046-eade-47f6-91bc-2f25e44a4c85" (UID: "7a5da046-eade-47f6-91bc-2f25e44a4c85"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.789640 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7a5da046-eade-47f6-91bc-2f25e44a4c85" (UID: "7a5da046-eade-47f6-91bc-2f25e44a4c85"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.792832 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a5da046-eade-47f6-91bc-2f25e44a4c85" (UID: "7a5da046-eade-47f6-91bc-2f25e44a4c85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.796869 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-scripts" (OuterVolumeSpecName: "scripts") pod "7a5da046-eade-47f6-91bc-2f25e44a4c85" (UID: "7a5da046-eade-47f6-91bc-2f25e44a4c85"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.865458 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.865511 4890 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a5da046-eade-47f6-91bc-2f25e44a4c85-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.865524 4890 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.865534 4890 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.865551 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5da046-eade-47f6-91bc-2f25e44a4c85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.865568 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfbch\" (UniqueName: \"kubernetes.io/projected/7a5da046-eade-47f6-91bc-2f25e44a4c85-kube-api-access-qfbch\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:09 crc kubenswrapper[4890]: I0121 15:52:09.865580 4890 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a5da046-eade-47f6-91bc-2f25e44a4c85-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.283192 4890 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pmrch" podUID="cdd2d089-a1a5-4e25-920a-a485d0fd319f" containerName="ovn-controller" probeResult="failure" output=< Jan 21 15:52:10 crc kubenswrapper[4890]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 15:52:10 crc kubenswrapper[4890]: > Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.312861 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.314073 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.344979 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zk2ll" event={"ID":"7a5da046-eade-47f6-91bc-2f25e44a4c85","Type":"ContainerDied","Data":"7daaeadc52af9b3a929dc2619df8d899f09e0ff55c69edbc5483882867b6426e"} Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.345020 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zk2ll" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.345030 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7daaeadc52af9b3a929dc2619df8d899f09e0ff55c69edbc5483882867b6426e" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.534514 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pmrch-config-vj6jz"] Jan 21 15:52:10 crc kubenswrapper[4890]: E0121 15:52:10.535103 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637e1cbc-a769-4deb-926c-ec36b9b6dc61" containerName="mariadb-database-create" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.535168 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="637e1cbc-a769-4deb-926c-ec36b9b6dc61" containerName="mariadb-database-create" Jan 21 15:52:10 crc kubenswrapper[4890]: E0121 15:52:10.535240 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac05403-ccde-41a4-9312-7c536af1825d" containerName="mariadb-account-create-update" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.535297 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac05403-ccde-41a4-9312-7c536af1825d" containerName="mariadb-account-create-update" Jan 21 15:52:10 crc kubenswrapper[4890]: E0121 15:52:10.535364 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5da046-eade-47f6-91bc-2f25e44a4c85" containerName="swift-ring-rebalance" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.535426 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5da046-eade-47f6-91bc-2f25e44a4c85" containerName="swift-ring-rebalance" Jan 21 15:52:10 crc kubenswrapper[4890]: E0121 15:52:10.535486 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad5e04e-3d58-4106-aef9-e589cfc46280" containerName="mariadb-account-create-update" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.535541 4890 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bad5e04e-3d58-4106-aef9-e589cfc46280" containerName="mariadb-account-create-update" Jan 21 15:52:10 crc kubenswrapper[4890]: E0121 15:52:10.535602 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944" containerName="mariadb-database-create" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.535657 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944" containerName="mariadb-database-create" Jan 21 15:52:10 crc kubenswrapper[4890]: E0121 15:52:10.535717 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40c65d3-b792-4fba-b282-4f1943d4f71f" containerName="mariadb-account-create-update" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.535764 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40c65d3-b792-4fba-b282-4f1943d4f71f" containerName="mariadb-account-create-update" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.535958 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad5e04e-3d58-4106-aef9-e589cfc46280" containerName="mariadb-account-create-update" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.536025 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944" containerName="mariadb-database-create" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.536108 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac05403-ccde-41a4-9312-7c536af1825d" containerName="mariadb-account-create-update" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.536287 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40c65d3-b792-4fba-b282-4f1943d4f71f" containerName="mariadb-account-create-update" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.536393 4890 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7a5da046-eade-47f6-91bc-2f25e44a4c85" containerName="swift-ring-rebalance" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.536476 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="637e1cbc-a769-4deb-926c-ec36b9b6dc61" containerName="mariadb-database-create" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.537244 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.539756 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.555004 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pmrch-config-vj6jz"] Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.681759 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run-ovn\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.682087 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-log-ovn\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.682215 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcqdk\" (UniqueName: \"kubernetes.io/projected/202085c6-8a25-4583-8792-7da263376f95-kube-api-access-wcqdk\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: 
\"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.682315 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-additional-scripts\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.682450 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.682532 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-scripts\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.784475 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-additional-scripts\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.784592 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run\") pod 
\"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.784619 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-scripts\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.784722 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run-ovn\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.784786 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-log-ovn\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.784806 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcqdk\" (UniqueName: \"kubernetes.io/projected/202085c6-8a25-4583-8792-7da263376f95-kube-api-access-wcqdk\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.785544 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run\") pod \"ovn-controller-pmrch-config-vj6jz\" 
(UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.785635 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run-ovn\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.785805 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-log-ovn\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.790991 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-additional-scripts\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.792131 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-scripts\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.808119 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcqdk\" (UniqueName: \"kubernetes.io/projected/202085c6-8a25-4583-8792-7da263376f95-kube-api-access-wcqdk\") pod \"ovn-controller-pmrch-config-vj6jz\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") 
" pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:10 crc kubenswrapper[4890]: I0121 15:52:10.856288 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:11 crc kubenswrapper[4890]: I0121 15:52:11.387318 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pmrch-config-vj6jz"] Jan 21 15:52:11 crc kubenswrapper[4890]: W0121 15:52:11.411467 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod202085c6_8a25_4583_8792_7da263376f95.slice/crio-4819c74682eaabaa1b1325af9dd9e431725e3312321f3e12c65e6d0a42902818 WatchSource:0}: Error finding container 4819c74682eaabaa1b1325af9dd9e431725e3312321f3e12c65e6d0a42902818: Status 404 returned error can't find the container with id 4819c74682eaabaa1b1325af9dd9e431725e3312321f3e12c65e6d0a42902818 Jan 21 15:52:12 crc kubenswrapper[4890]: I0121 15:52:12.360975 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch-config-vj6jz" event={"ID":"202085c6-8a25-4583-8792-7da263376f95","Type":"ContainerStarted","Data":"4819c74682eaabaa1b1325af9dd9e431725e3312321f3e12c65e6d0a42902818"} Jan 21 15:52:13 crc kubenswrapper[4890]: I0121 15:52:13.372793 4890 generic.go:334] "Generic (PLEG): container finished" podID="202085c6-8a25-4583-8792-7da263376f95" containerID="dd79ac2382ba4799fcbadddf89c4782f72ca122863877ab13c4b968f86b00abd" exitCode=0 Jan 21 15:52:13 crc kubenswrapper[4890]: I0121 15:52:13.372842 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch-config-vj6jz" event={"ID":"202085c6-8a25-4583-8792-7da263376f95","Type":"ContainerDied","Data":"dd79ac2382ba4799fcbadddf89c4782f72ca122863877ab13c4b968f86b00abd"} Jan 21 15:52:13 crc kubenswrapper[4890]: I0121 15:52:13.882275 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-j6fxt"] Jan 21 15:52:13 crc kubenswrapper[4890]: I0121 15:52:13.889186 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j6fxt"] Jan 21 15:52:13 crc kubenswrapper[4890]: I0121 15:52:13.922566 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad5e04e-3d58-4106-aef9-e589cfc46280" path="/var/lib/kubelet/pods/bad5e04e-3d58-4106-aef9-e589cfc46280/volumes" Jan 21 15:52:15 crc kubenswrapper[4890]: I0121 15:52:15.283014 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-pmrch" Jan 21 15:52:17 crc kubenswrapper[4890]: I0121 15:52:17.406732 4890 generic.go:334] "Generic (PLEG): container finished" podID="9bb9aa52-0895-418e-8e0b-d922948e85a7" containerID="33df5a7bef461b044f5948e6d82a92f50152c168d4b306d9e90252ca8c70cd02" exitCode=0 Jan 21 15:52:17 crc kubenswrapper[4890]: I0121 15:52:17.406781 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9bb9aa52-0895-418e-8e0b-d922948e85a7","Type":"ContainerDied","Data":"33df5a7bef461b044f5948e6d82a92f50152c168d4b306d9e90252ca8c70cd02"} Jan 21 15:52:17 crc kubenswrapper[4890]: I0121 15:52:17.412404 4890 generic.go:334] "Generic (PLEG): container finished" podID="caae7093-b594-47fb-b863-38d825f0048d" containerID="f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66" exitCode=0 Jan 21 15:52:17 crc kubenswrapper[4890]: I0121 15:52:17.412445 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caae7093-b594-47fb-b863-38d825f0048d","Type":"ContainerDied","Data":"f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66"} Jan 21 15:52:18 crc kubenswrapper[4890]: I0121 15:52:18.762390 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:52:18 crc kubenswrapper[4890]: I0121 15:52:18.762780 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:52:18 crc kubenswrapper[4890]: I0121 15:52:18.903939 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f747l"] Jan 21 15:52:18 crc kubenswrapper[4890]: I0121 15:52:18.907819 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f747l" Jan 21 15:52:18 crc kubenswrapper[4890]: I0121 15:52:18.911273 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 15:52:18 crc kubenswrapper[4890]: I0121 15:52:18.917815 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f747l"] Jan 21 15:52:19 crc kubenswrapper[4890]: I0121 15:52:19.033585 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xkpf\" (UniqueName: \"kubernetes.io/projected/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-kube-api-access-5xkpf\") pod \"root-account-create-update-f747l\" (UID: \"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\") " pod="openstack/root-account-create-update-f747l" Jan 21 15:52:19 crc kubenswrapper[4890]: I0121 15:52:19.033985 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-operator-scripts\") pod \"root-account-create-update-f747l\" (UID: 
\"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\") " pod="openstack/root-account-create-update-f747l" Jan 21 15:52:19 crc kubenswrapper[4890]: I0121 15:52:19.136026 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xkpf\" (UniqueName: \"kubernetes.io/projected/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-kube-api-access-5xkpf\") pod \"root-account-create-update-f747l\" (UID: \"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\") " pod="openstack/root-account-create-update-f747l" Jan 21 15:52:19 crc kubenswrapper[4890]: I0121 15:52:19.136201 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-operator-scripts\") pod \"root-account-create-update-f747l\" (UID: \"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\") " pod="openstack/root-account-create-update-f747l" Jan 21 15:52:19 crc kubenswrapper[4890]: I0121 15:52:19.136943 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-operator-scripts\") pod \"root-account-create-update-f747l\" (UID: \"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\") " pod="openstack/root-account-create-update-f747l" Jan 21 15:52:19 crc kubenswrapper[4890]: I0121 15:52:19.159010 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xkpf\" (UniqueName: \"kubernetes.io/projected/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-kube-api-access-5xkpf\") pod \"root-account-create-update-f747l\" (UID: \"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\") " pod="openstack/root-account-create-update-f747l" Jan 21 15:52:19 crc kubenswrapper[4890]: I0121 15:52:19.225963 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f747l" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.439091 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch-config-vj6jz" event={"ID":"202085c6-8a25-4583-8792-7da263376f95","Type":"ContainerDied","Data":"4819c74682eaabaa1b1325af9dd9e431725e3312321f3e12c65e6d0a42902818"} Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.439401 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4819c74682eaabaa1b1325af9dd9e431725e3312321f3e12c65e6d0a42902818" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.533222 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.560176 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.570513 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"swift-storage-0\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " pod="openstack/swift-storage-0" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.663311 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-log-ovn\") pod \"202085c6-8a25-4583-8792-7da263376f95\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.663436 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-additional-scripts\") pod \"202085c6-8a25-4583-8792-7da263376f95\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.663557 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run\") pod \"202085c6-8a25-4583-8792-7da263376f95\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.663590 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcqdk\" (UniqueName: \"kubernetes.io/projected/202085c6-8a25-4583-8792-7da263376f95-kube-api-access-wcqdk\") pod \"202085c6-8a25-4583-8792-7da263376f95\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.663612 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-scripts\") pod \"202085c6-8a25-4583-8792-7da263376f95\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.663629 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run-ovn\") pod \"202085c6-8a25-4583-8792-7da263376f95\" (UID: \"202085c6-8a25-4583-8792-7da263376f95\") " Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.664015 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "202085c6-8a25-4583-8792-7da263376f95" (UID: "202085c6-8a25-4583-8792-7da263376f95"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.663994 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "202085c6-8a25-4583-8792-7da263376f95" (UID: "202085c6-8a25-4583-8792-7da263376f95"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.664421 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run" (OuterVolumeSpecName: "var-run") pod "202085c6-8a25-4583-8792-7da263376f95" (UID: "202085c6-8a25-4583-8792-7da263376f95"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.664696 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "202085c6-8a25-4583-8792-7da263376f95" (UID: "202085c6-8a25-4583-8792-7da263376f95"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.666563 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-scripts" (OuterVolumeSpecName: "scripts") pod "202085c6-8a25-4583-8792-7da263376f95" (UID: "202085c6-8a25-4583-8792-7da263376f95"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.668535 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202085c6-8a25-4583-8792-7da263376f95-kube-api-access-wcqdk" (OuterVolumeSpecName: "kube-api-access-wcqdk") pod "202085c6-8a25-4583-8792-7da263376f95" (UID: "202085c6-8a25-4583-8792-7da263376f95"). InnerVolumeSpecName "kube-api-access-wcqdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.746181 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.764991 4890 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.765020 4890 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.765031 4890 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.765040 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcqdk\" (UniqueName: \"kubernetes.io/projected/202085c6-8a25-4583-8792-7da263376f95-kube-api-access-wcqdk\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.765048 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/202085c6-8a25-4583-8792-7da263376f95-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.765056 4890 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/202085c6-8a25-4583-8792-7da263376f95-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:20 crc kubenswrapper[4890]: I0121 15:52:20.872994 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f747l"] Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.291841 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 15:52:21 crc kubenswrapper[4890]: W0121 15:52:21.299229 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7d46fba_02db_42e1_a916_1b2528bbdd52.slice/crio-a32af2399b775ce1308942e9ee1c941e38291012df640443686b78732e95ac5f WatchSource:0}: Error finding container a32af2399b775ce1308942e9ee1c941e38291012df640443686b78732e95ac5f: Status 404 returned error can't find the container with id a32af2399b775ce1308942e9ee1c941e38291012df640443686b78732e95ac5f Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.446627 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"a32af2399b775ce1308942e9ee1c941e38291012df640443686b78732e95ac5f"} Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.449430 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9bb9aa52-0895-418e-8e0b-d922948e85a7","Type":"ContainerStarted","Data":"489037191e7d74a2730eac1c46abc09d34fce2781e436638fcd47291281cfd30"} Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.449667 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.451820 4890 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caae7093-b594-47fb-b863-38d825f0048d","Type":"ContainerStarted","Data":"ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2"} Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.452030 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.453692 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-skk7h" event={"ID":"2f759e91-6dab-4432-9431-ce312918c7e7","Type":"ContainerStarted","Data":"eaf7230cafb909a1fed57ee77b4de32797dc9fca4fefd4c6cbb0781439b57e03"} Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.455550 4890 generic.go:334] "Generic (PLEG): container finished" podID="b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd" containerID="d9f68bc7764f75ac9e2f265b029157c523efe523b7ad9fb1d218658e82fd95fe" exitCode=0 Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.455619 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pmrch-config-vj6jz" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.455615 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f747l" event={"ID":"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd","Type":"ContainerDied","Data":"d9f68bc7764f75ac9e2f265b029157c523efe523b7ad9fb1d218658e82fd95fe"} Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.455705 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f747l" event={"ID":"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd","Type":"ContainerStarted","Data":"28e60a6c9462b4bab888107e1fadfe40c2762c9ae332db44f15ea998fe332362"} Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.486119 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.705100387 podStartE2EDuration="1m21.48609403s" podCreationTimestamp="2026-01-21 15:51:00 +0000 UTC" firstStartedPulling="2026-01-21 15:51:02.846495042 +0000 UTC m=+1145.207937461" lastFinishedPulling="2026-01-21 15:51:43.627488695 +0000 UTC m=+1185.988931104" observedRunningTime="2026-01-21 15:52:21.479003774 +0000 UTC m=+1223.840446183" watchObservedRunningTime="2026-01-21 15:52:21.48609403 +0000 UTC m=+1223.847536439" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.510154 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-skk7h" podStartSLOduration=2.758278286 podStartE2EDuration="16.510136286s" podCreationTimestamp="2026-01-21 15:52:05 +0000 UTC" firstStartedPulling="2026-01-21 15:52:06.822784808 +0000 UTC m=+1209.184227217" lastFinishedPulling="2026-01-21 15:52:20.574642808 +0000 UTC m=+1222.936085217" observedRunningTime="2026-01-21 15:52:21.504805264 +0000 UTC m=+1223.866247673" watchObservedRunningTime="2026-01-21 15:52:21.510136286 +0000 UTC m=+1223.871578695" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 
15:52:21.548878 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.47942496 podStartE2EDuration="1m21.548861496s" podCreationTimestamp="2026-01-21 15:51:00 +0000 UTC" firstStartedPulling="2026-01-21 15:51:02.55808518 +0000 UTC m=+1144.919527589" lastFinishedPulling="2026-01-21 15:51:43.627521726 +0000 UTC m=+1185.988964125" observedRunningTime="2026-01-21 15:52:21.543010251 +0000 UTC m=+1223.904452660" watchObservedRunningTime="2026-01-21 15:52:21.548861496 +0000 UTC m=+1223.910303905" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.629138 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pmrch-config-vj6jz"] Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.637636 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pmrch-config-vj6jz"] Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.741841 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pmrch-config-jlhvq"] Jan 21 15:52:21 crc kubenswrapper[4890]: E0121 15:52:21.742181 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202085c6-8a25-4583-8792-7da263376f95" containerName="ovn-config" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.742194 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="202085c6-8a25-4583-8792-7da263376f95" containerName="ovn-config" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.742344 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="202085c6-8a25-4583-8792-7da263376f95" containerName="ovn-config" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.742849 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.745115 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.757394 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pmrch-config-jlhvq"] Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.788718 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gzk6\" (UniqueName: \"kubernetes.io/projected/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-kube-api-access-4gzk6\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.788779 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.788888 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-scripts\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.788926 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-log-ovn\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: 
\"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.788966 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-additional-scripts\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.789052 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run-ovn\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.890506 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run-ovn\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.890583 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gzk6\" (UniqueName: \"kubernetes.io/projected/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-kube-api-access-4gzk6\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.890629 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run\") pod 
\"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.890709 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-scripts\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.890735 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-log-ovn\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.890762 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-additional-scripts\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.890872 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run-ovn\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.890900 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-log-ovn\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: 
\"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.890955 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.891468 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-additional-scripts\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.893526 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-scripts\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.910403 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gzk6\" (UniqueName: \"kubernetes.io/projected/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-kube-api-access-4gzk6\") pod \"ovn-controller-pmrch-config-jlhvq\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:21 crc kubenswrapper[4890]: I0121 15:52:21.929654 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202085c6-8a25-4583-8792-7da263376f95" path="/var/lib/kubelet/pods/202085c6-8a25-4583-8792-7da263376f95/volumes" Jan 21 15:52:22 crc kubenswrapper[4890]: I0121 15:52:22.062258 4890 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:22 crc kubenswrapper[4890]: W0121 15:52:22.373136 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b50adaa_3ffd_4fd1_9151_5bad07fd2243.slice/crio-f26e1c23b97d8d7dc90020860c4a450f9fdb226268b974bbd37775dc55ff22b8 WatchSource:0}: Error finding container f26e1c23b97d8d7dc90020860c4a450f9fdb226268b974bbd37775dc55ff22b8: Status 404 returned error can't find the container with id f26e1c23b97d8d7dc90020860c4a450f9fdb226268b974bbd37775dc55ff22b8 Jan 21 15:52:22 crc kubenswrapper[4890]: I0121 15:52:22.374713 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pmrch-config-jlhvq"] Jan 21 15:52:22 crc kubenswrapper[4890]: I0121 15:52:22.471623 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch-config-jlhvq" event={"ID":"0b50adaa-3ffd-4fd1-9151-5bad07fd2243","Type":"ContainerStarted","Data":"f26e1c23b97d8d7dc90020860c4a450f9fdb226268b974bbd37775dc55ff22b8"} Jan 21 15:52:22 crc kubenswrapper[4890]: I0121 15:52:22.894501 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f747l" Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.016150 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xkpf\" (UniqueName: \"kubernetes.io/projected/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-kube-api-access-5xkpf\") pod \"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\" (UID: \"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\") " Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.016209 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-operator-scripts\") pod \"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\" (UID: \"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd\") " Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.018688 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd" (UID: "b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.023369 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-kube-api-access-5xkpf" (OuterVolumeSpecName: "kube-api-access-5xkpf") pod "b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd" (UID: "b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd"). InnerVolumeSpecName "kube-api-access-5xkpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.118091 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xkpf\" (UniqueName: \"kubernetes.io/projected/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-kube-api-access-5xkpf\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.118123 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.481796 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"b12bd693bb7580997fa08c163b6c91d65afd3c016d9dbb69b3a75a78a8a917e1"} Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.482064 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"ec758b8a6824700021b92bcf01c6881e87a7af7bbc0acf6895ec0b0549188a0c"} Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.482077 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"044efc2d7955bb08fe4ff237c3a7e4e25d9ab4e72fa5d3faa7c58ac27561b350"} Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.492621 4890 generic.go:334] "Generic (PLEG): container finished" podID="0b50adaa-3ffd-4fd1-9151-5bad07fd2243" containerID="bff60feda0b6764dfca45482723eee1a185f385fe89ec7e4db2d7b58ad76e34b" exitCode=0 Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.492713 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch-config-jlhvq" 
event={"ID":"0b50adaa-3ffd-4fd1-9151-5bad07fd2243","Type":"ContainerDied","Data":"bff60feda0b6764dfca45482723eee1a185f385fe89ec7e4db2d7b58ad76e34b"} Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.497586 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f747l" event={"ID":"b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd","Type":"ContainerDied","Data":"28e60a6c9462b4bab888107e1fadfe40c2762c9ae332db44f15ea998fe332362"} Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.497618 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f747l" Jan 21 15:52:23 crc kubenswrapper[4890]: I0121 15:52:23.497622 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e60a6c9462b4bab888107e1fadfe40c2762c9ae332db44f15ea998fe332362" Jan 21 15:52:24 crc kubenswrapper[4890]: I0121 15:52:24.516922 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"56a854520d26c749a116af4b530898a508240c3791da8d8b127790fb93dfdcc0"} Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.030658 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.166028 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gzk6\" (UniqueName: \"kubernetes.io/projected/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-kube-api-access-4gzk6\") pod \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.166159 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-additional-scripts\") pod \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.166195 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run\") pod \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.166311 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-scripts\") pod \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.166471 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-log-ovn\") pod \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.166515 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run-ovn\") pod \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\" (UID: \"0b50adaa-3ffd-4fd1-9151-5bad07fd2243\") " Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.167061 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0b50adaa-3ffd-4fd1-9151-5bad07fd2243" (UID: "0b50adaa-3ffd-4fd1-9151-5bad07fd2243"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.167922 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run" (OuterVolumeSpecName: "var-run") pod "0b50adaa-3ffd-4fd1-9151-5bad07fd2243" (UID: "0b50adaa-3ffd-4fd1-9151-5bad07fd2243"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.168177 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0b50adaa-3ffd-4fd1-9151-5bad07fd2243" (UID: "0b50adaa-3ffd-4fd1-9151-5bad07fd2243"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.168644 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0b50adaa-3ffd-4fd1-9151-5bad07fd2243" (UID: "0b50adaa-3ffd-4fd1-9151-5bad07fd2243"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.169150 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-scripts" (OuterVolumeSpecName: "scripts") pod "0b50adaa-3ffd-4fd1-9151-5bad07fd2243" (UID: "0b50adaa-3ffd-4fd1-9151-5bad07fd2243"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.184651 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-kube-api-access-4gzk6" (OuterVolumeSpecName: "kube-api-access-4gzk6") pod "0b50adaa-3ffd-4fd1-9151-5bad07fd2243" (UID: "0b50adaa-3ffd-4fd1-9151-5bad07fd2243"). InnerVolumeSpecName "kube-api-access-4gzk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.268688 4890 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.268735 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gzk6\" (UniqueName: \"kubernetes.io/projected/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-kube-api-access-4gzk6\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.268749 4890 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.268762 4890 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-run\") on node \"crc\" 
DevicePath \"\"" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.268773 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.268782 4890 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b50adaa-3ffd-4fd1-9151-5bad07fd2243-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.526148 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch-config-jlhvq" event={"ID":"0b50adaa-3ffd-4fd1-9151-5bad07fd2243","Type":"ContainerDied","Data":"f26e1c23b97d8d7dc90020860c4a450f9fdb226268b974bbd37775dc55ff22b8"} Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.526199 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f26e1c23b97d8d7dc90020860c4a450f9fdb226268b974bbd37775dc55ff22b8" Jan 21 15:52:25 crc kubenswrapper[4890]: I0121 15:52:25.526264 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pmrch-config-jlhvq" Jan 21 15:52:26 crc kubenswrapper[4890]: I0121 15:52:26.116500 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pmrch-config-jlhvq"] Jan 21 15:52:26 crc kubenswrapper[4890]: I0121 15:52:26.126986 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pmrch-config-jlhvq"] Jan 21 15:52:26 crc kubenswrapper[4890]: I0121 15:52:26.539042 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"520ea43d4d0b04096ca36e892322861f691a6670e78931f59f2ea9d885179af5"} Jan 21 15:52:26 crc kubenswrapper[4890]: I0121 15:52:26.539095 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"ae1658689b220e377c8fba9958351f538aaba5502635f74cadc260a696a44a6f"} Jan 21 15:52:26 crc kubenswrapper[4890]: I0121 15:52:26.539110 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"5fa5e2d9ca2571b7361e659ef85544eb30c548cf9527ac1a3be6a7a829e8fbee"} Jan 21 15:52:27 crc kubenswrapper[4890]: I0121 15:52:27.550001 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"8616884f18e315e3258c25763c5c8cdaea184dc25ba69e7d8e0fa91ac49eaa89"} Jan 21 15:52:27 crc kubenswrapper[4890]: I0121 15:52:27.926408 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b50adaa-3ffd-4fd1-9151-5bad07fd2243" path="/var/lib/kubelet/pods/0b50adaa-3ffd-4fd1-9151-5bad07fd2243/volumes" Jan 21 15:52:28 crc kubenswrapper[4890]: I0121 15:52:28.578726 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"15ae8d44e4e537260de3b6431b223bf85ce1e10d4762ac9a192b7a7606fb94e3"} Jan 21 15:52:29 crc kubenswrapper[4890]: I0121 15:52:29.591264 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"22335d0f4d49f32620ca48289dd4eb408b7f064e87d7877cc89abf517378da85"} Jan 21 15:52:29 crc kubenswrapper[4890]: I0121 15:52:29.591314 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"1df25c4313e8f39ad26d3ec8a848f850a004e7acdea809912d27022424ac0fec"} Jan 21 15:52:29 crc kubenswrapper[4890]: I0121 15:52:29.591330 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"291b43ebb5749379f57dbecf17da84aa48983e3db96591d9b7e0aa8d76cc1621"} Jan 21 15:52:29 crc kubenswrapper[4890]: I0121 15:52:29.591342 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"7c5460ff3a431a21df2a718e89dbf2a5a523b0ee5fdfadf49395a1b74d24c6ab"} Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.616309 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"d353b883ad9d704cf38a51820b942338cdd8c742501c227a8140207f662015e8"} Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.616640 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerStarted","Data":"02a34f2bdfeb043480bedf1700ad25535feb47fbbf2cc661cbb62aad70e40a3b"} Jan 
21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.658329 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.618656599 podStartE2EDuration="44.658311645s" podCreationTimestamp="2026-01-21 15:51:47 +0000 UTC" firstStartedPulling="2026-01-21 15:52:21.301336108 +0000 UTC m=+1223.662778537" lastFinishedPulling="2026-01-21 15:52:28.340991174 +0000 UTC m=+1230.702433583" observedRunningTime="2026-01-21 15:52:31.650790618 +0000 UTC m=+1234.012233027" watchObservedRunningTime="2026-01-21 15:52:31.658311645 +0000 UTC m=+1234.019754054" Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.942782 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-spvst"] Jan 21 15:52:31 crc kubenswrapper[4890]: E0121 15:52:31.943105 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd" containerName="mariadb-account-create-update" Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.943121 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd" containerName="mariadb-account-create-update" Jan 21 15:52:31 crc kubenswrapper[4890]: E0121 15:52:31.943161 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b50adaa-3ffd-4fd1-9151-5bad07fd2243" containerName="ovn-config" Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.943169 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b50adaa-3ffd-4fd1-9151-5bad07fd2243" containerName="ovn-config" Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.943394 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd" containerName="mariadb-account-create-update" Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.943415 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b50adaa-3ffd-4fd1-9151-5bad07fd2243" containerName="ovn-config" 
Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.944448 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-spvst"] Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.944567 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.948992 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 15:52:31 crc kubenswrapper[4890]: I0121 15:52:31.976484 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.114130 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.114203 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-config\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.114228 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.114265 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.114457 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lp8\" (UniqueName: \"kubernetes.io/projected/04661de4-967f-4cc0-a4a3-ced72441fda3-kube-api-access-p5lp8\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.114507 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.215885 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lp8\" (UniqueName: \"kubernetes.io/projected/04661de4-967f-4cc0-a4a3-ced72441fda3-kube-api-access-p5lp8\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.215954 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.216011 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.216051 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-config\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.216076 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.216114 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.217141 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.217139 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.217270 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-config\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.217335 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.217774 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.262967 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lp8\" (UniqueName: \"kubernetes.io/projected/04661de4-967f-4cc0-a4a3-ced72441fda3-kube-api-access-p5lp8\") pod \"dnsmasq-dns-8467b54bcc-spvst\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.269292 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.384595 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.414139 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-lwpq6"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.415397 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.441960 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lwpq6"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.468146 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4691-account-create-update-pvhx9"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.469437 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.490331 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.527500 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/013a38e6-319d-4fd9-bba3-a05b6c10acd9-operator-scripts\") pod \"cinder-db-create-lwpq6\" (UID: \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\") " pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.559467 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8cvf\" (UniqueName: \"kubernetes.io/projected/013a38e6-319d-4fd9-bba3-a05b6c10acd9-kube-api-access-g8cvf\") pod \"cinder-db-create-lwpq6\" (UID: \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\") " pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.592537 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4691-account-create-update-pvhx9"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.619972 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-lb47b"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.621703 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.629198 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-lb47b"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.632685 4890 generic.go:334] "Generic (PLEG): container finished" podID="2f759e91-6dab-4432-9431-ce312918c7e7" containerID="eaf7230cafb909a1fed57ee77b4de32797dc9fca4fefd4c6cbb0781439b57e03" exitCode=0 Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.635297 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-skk7h" event={"ID":"2f759e91-6dab-4432-9431-ce312918c7e7","Type":"ContainerDied","Data":"eaf7230cafb909a1fed57ee77b4de32797dc9fca4fefd4c6cbb0781439b57e03"} Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.654837 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e4f9-account-create-update-x5k9s"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.655865 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.660185 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.662044 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/013a38e6-319d-4fd9-bba3-a05b6c10acd9-operator-scripts\") pod \"cinder-db-create-lwpq6\" (UID: \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\") " pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.662080 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8cvf\" (UniqueName: \"kubernetes.io/projected/013a38e6-319d-4fd9-bba3-a05b6c10acd9-kube-api-access-g8cvf\") pod \"cinder-db-create-lwpq6\" (UID: \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\") " pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.662105 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700a77fe-9836-4979-8c95-7054c3d8d42a-operator-scripts\") pod \"barbican-4691-account-create-update-pvhx9\" (UID: \"700a77fe-9836-4979-8c95-7054c3d8d42a\") " pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.662130 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf5z5\" (UniqueName: \"kubernetes.io/projected/700a77fe-9836-4979-8c95-7054c3d8d42a-kube-api-access-wf5z5\") pod \"barbican-4691-account-create-update-pvhx9\" (UID: \"700a77fe-9836-4979-8c95-7054c3d8d42a\") " pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.663668 4890 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/013a38e6-319d-4fd9-bba3-a05b6c10acd9-operator-scripts\") pod \"cinder-db-create-lwpq6\" (UID: \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\") " pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.677945 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-n8mlq"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.679855 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.689206 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.689619 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l9jxh" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.693991 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.694175 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.694275 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e4f9-account-create-update-x5k9s"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.703546 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8cvf\" (UniqueName: \"kubernetes.io/projected/013a38e6-319d-4fd9-bba3-a05b6c10acd9-kube-api-access-g8cvf\") pod \"cinder-db-create-lwpq6\" (UID: \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\") " pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.719072 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-n8mlq"] Jan 21 15:52:32 
crc kubenswrapper[4890]: I0121 15:52:32.732460 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.763743 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-config-data\") pod \"keystone-db-sync-n8mlq\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.763832 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a47ebe-8900-4913-b7a1-9988e32cc5dc-operator-scripts\") pod \"barbican-db-create-lb47b\" (UID: \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\") " pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.763885 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-combined-ca-bundle\") pod \"keystone-db-sync-n8mlq\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.763912 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700a77fe-9836-4979-8c95-7054c3d8d42a-operator-scripts\") pod \"barbican-4691-account-create-update-pvhx9\" (UID: \"700a77fe-9836-4979-8c95-7054c3d8d42a\") " pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.763953 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf5z5\" (UniqueName: 
\"kubernetes.io/projected/700a77fe-9836-4979-8c95-7054c3d8d42a-kube-api-access-wf5z5\") pod \"barbican-4691-account-create-update-pvhx9\" (UID: \"700a77fe-9836-4979-8c95-7054c3d8d42a\") " pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.763988 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a501566d-03dd-40b1-bda9-8c6173d9292f-operator-scripts\") pod \"cinder-e4f9-account-create-update-x5k9s\" (UID: \"a501566d-03dd-40b1-bda9-8c6173d9292f\") " pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.764006 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82m2p\" (UniqueName: \"kubernetes.io/projected/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-kube-api-access-82m2p\") pod \"keystone-db-sync-n8mlq\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.764024 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfvsc\" (UniqueName: \"kubernetes.io/projected/a501566d-03dd-40b1-bda9-8c6173d9292f-kube-api-access-bfvsc\") pod \"cinder-e4f9-account-create-update-x5k9s\" (UID: \"a501566d-03dd-40b1-bda9-8c6173d9292f\") " pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.764053 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dpx\" (UniqueName: \"kubernetes.io/projected/55a47ebe-8900-4913-b7a1-9988e32cc5dc-kube-api-access-62dpx\") pod \"barbican-db-create-lb47b\" (UID: \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\") " pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 
15:52:32.766610 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700a77fe-9836-4979-8c95-7054c3d8d42a-operator-scripts\") pod \"barbican-4691-account-create-update-pvhx9\" (UID: \"700a77fe-9836-4979-8c95-7054c3d8d42a\") " pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.861495 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf5z5\" (UniqueName: \"kubernetes.io/projected/700a77fe-9836-4979-8c95-7054c3d8d42a-kube-api-access-wf5z5\") pod \"barbican-4691-account-create-update-pvhx9\" (UID: \"700a77fe-9836-4979-8c95-7054c3d8d42a\") " pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.865922 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-config-data\") pod \"keystone-db-sync-n8mlq\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.866014 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a47ebe-8900-4913-b7a1-9988e32cc5dc-operator-scripts\") pod \"barbican-db-create-lb47b\" (UID: \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\") " pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.866059 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-combined-ca-bundle\") pod \"keystone-db-sync-n8mlq\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.866105 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a501566d-03dd-40b1-bda9-8c6173d9292f-operator-scripts\") pod \"cinder-e4f9-account-create-update-x5k9s\" (UID: \"a501566d-03dd-40b1-bda9-8c6173d9292f\") " pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.866124 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82m2p\" (UniqueName: \"kubernetes.io/projected/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-kube-api-access-82m2p\") pod \"keystone-db-sync-n8mlq\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.866145 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfvsc\" (UniqueName: \"kubernetes.io/projected/a501566d-03dd-40b1-bda9-8c6173d9292f-kube-api-access-bfvsc\") pod \"cinder-e4f9-account-create-update-x5k9s\" (UID: \"a501566d-03dd-40b1-bda9-8c6173d9292f\") " pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.866175 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62dpx\" (UniqueName: \"kubernetes.io/projected/55a47ebe-8900-4913-b7a1-9988e32cc5dc-kube-api-access-62dpx\") pod \"barbican-db-create-lb47b\" (UID: \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\") " pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.867861 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-j254l"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.867860 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a47ebe-8900-4913-b7a1-9988e32cc5dc-operator-scripts\") pod \"barbican-db-create-lb47b\" 
(UID: \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\") " pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.868594 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a501566d-03dd-40b1-bda9-8c6173d9292f-operator-scripts\") pod \"cinder-e4f9-account-create-update-x5k9s\" (UID: \"a501566d-03dd-40b1-bda9-8c6173d9292f\") " pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.878311 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j254l" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.880141 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-combined-ca-bundle\") pod \"keystone-db-sync-n8mlq\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.886463 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-config-data\") pod \"keystone-db-sync-n8mlq\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.895554 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j254l"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.901984 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62dpx\" (UniqueName: \"kubernetes.io/projected/55a47ebe-8900-4913-b7a1-9988e32cc5dc-kube-api-access-62dpx\") pod \"barbican-db-create-lb47b\" (UID: \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\") " pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:32 crc kubenswrapper[4890]: 
I0121 15:52:32.927060 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfvsc\" (UniqueName: \"kubernetes.io/projected/a501566d-03dd-40b1-bda9-8c6173d9292f-kube-api-access-bfvsc\") pod \"cinder-e4f9-account-create-update-x5k9s\" (UID: \"a501566d-03dd-40b1-bda9-8c6173d9292f\") " pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.930384 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82m2p\" (UniqueName: \"kubernetes.io/projected/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-kube-api-access-82m2p\") pod \"keystone-db-sync-n8mlq\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.968860 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pskc\" (UniqueName: \"kubernetes.io/projected/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-kube-api-access-5pskc\") pod \"neutron-db-create-j254l\" (UID: \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\") " pod="openstack/neutron-db-create-j254l" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.969020 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-operator-scripts\") pod \"neutron-db-create-j254l\" (UID: \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\") " pod="openstack/neutron-db-create-j254l" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.980451 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.997455 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-de17-account-create-update-jrxd2"] Jan 21 15:52:32 crc kubenswrapper[4890]: I0121 15:52:32.998682 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.002725 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.003388 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.011130 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-de17-account-create-update-jrxd2"] Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.069890 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-spvst"] Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.070365 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-operator-scripts\") pod \"neutron-db-create-j254l\" (UID: \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\") " pod="openstack/neutron-db-create-j254l" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.070441 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmtv\" (UniqueName: \"kubernetes.io/projected/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-kube-api-access-thmtv\") pod \"neutron-de17-account-create-update-jrxd2\" (UID: \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\") " pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:33 crc 
kubenswrapper[4890]: I0121 15:52:33.070481 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pskc\" (UniqueName: \"kubernetes.io/projected/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-kube-api-access-5pskc\") pod \"neutron-db-create-j254l\" (UID: \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\") " pod="openstack/neutron-db-create-j254l" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.070549 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-operator-scripts\") pod \"neutron-de17-account-create-update-jrxd2\" (UID: \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\") " pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.071283 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-operator-scripts\") pod \"neutron-db-create-j254l\" (UID: \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\") " pod="openstack/neutron-db-create-j254l" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.112030 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pskc\" (UniqueName: \"kubernetes.io/projected/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-kube-api-access-5pskc\") pod \"neutron-db-create-j254l\" (UID: \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\") " pod="openstack/neutron-db-create-j254l" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.137589 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.162007 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.176321 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-operator-scripts\") pod \"neutron-de17-account-create-update-jrxd2\" (UID: \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\") " pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.176444 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmtv\" (UniqueName: \"kubernetes.io/projected/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-kube-api-access-thmtv\") pod \"neutron-de17-account-create-update-jrxd2\" (UID: \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\") " pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.177597 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-operator-scripts\") pod \"neutron-de17-account-create-update-jrxd2\" (UID: \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\") " pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.214315 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-j254l" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.222087 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmtv\" (UniqueName: \"kubernetes.io/projected/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-kube-api-access-thmtv\") pod \"neutron-de17-account-create-update-jrxd2\" (UID: \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\") " pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.357047 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.367045 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lwpq6"] Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.655571 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lwpq6" event={"ID":"013a38e6-319d-4fd9-bba3-a05b6c10acd9","Type":"ContainerStarted","Data":"f09d6b77117291a2d126047657be23ec584755d86213f73822913f3e102779cd"} Jan 21 15:52:33 crc kubenswrapper[4890]: I0121 15:52:33.657613 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" event={"ID":"04661de4-967f-4cc0-a4a3-ced72441fda3","Type":"ContainerStarted","Data":"6fc13fb01e51715e93d3a03bb490a357fbf3677dff523b434fa3e95303e77e35"} Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.410990 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.469875 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfdvh\" (UniqueName: \"kubernetes.io/projected/2f759e91-6dab-4432-9431-ce312918c7e7-kube-api-access-xfdvh\") pod \"2f759e91-6dab-4432-9431-ce312918c7e7\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.470079 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-config-data\") pod \"2f759e91-6dab-4432-9431-ce312918c7e7\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.470228 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-db-sync-config-data\") pod \"2f759e91-6dab-4432-9431-ce312918c7e7\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.470267 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-combined-ca-bundle\") pod \"2f759e91-6dab-4432-9431-ce312918c7e7\" (UID: \"2f759e91-6dab-4432-9431-ce312918c7e7\") " Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.475672 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f759e91-6dab-4432-9431-ce312918c7e7-kube-api-access-xfdvh" (OuterVolumeSpecName: "kube-api-access-xfdvh") pod "2f759e91-6dab-4432-9431-ce312918c7e7" (UID: "2f759e91-6dab-4432-9431-ce312918c7e7"). InnerVolumeSpecName "kube-api-access-xfdvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.476228 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2f759e91-6dab-4432-9431-ce312918c7e7" (UID: "2f759e91-6dab-4432-9431-ce312918c7e7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.499864 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f759e91-6dab-4432-9431-ce312918c7e7" (UID: "2f759e91-6dab-4432-9431-ce312918c7e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.528121 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-config-data" (OuterVolumeSpecName: "config-data") pod "2f759e91-6dab-4432-9431-ce312918c7e7" (UID: "2f759e91-6dab-4432-9431-ce312918c7e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.572101 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.572136 4890 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.572150 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f759e91-6dab-4432-9431-ce312918c7e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.572165 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfdvh\" (UniqueName: \"kubernetes.io/projected/2f759e91-6dab-4432-9431-ce312918c7e7-kube-api-access-xfdvh\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.692362 4890 generic.go:334] "Generic (PLEG): container finished" podID="013a38e6-319d-4fd9-bba3-a05b6c10acd9" containerID="8288419be0984195f05e824990ff0b35010407e8281682f9c40f8f53106793d8" exitCode=0 Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.692437 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lwpq6" event={"ID":"013a38e6-319d-4fd9-bba3-a05b6c10acd9","Type":"ContainerDied","Data":"8288419be0984195f05e824990ff0b35010407e8281682f9c40f8f53106793d8"} Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.694032 4890 generic.go:334] "Generic (PLEG): container finished" podID="04661de4-967f-4cc0-a4a3-ced72441fda3" containerID="e84e0b9cadb72aa045b4212f18cd600d7adb9dcaea44a62678fa91391009c26e" exitCode=0 Jan 21 15:52:37 crc 
kubenswrapper[4890]: I0121 15:52:37.694106 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" event={"ID":"04661de4-967f-4cc0-a4a3-ced72441fda3","Type":"ContainerDied","Data":"e84e0b9cadb72aa045b4212f18cd600d7adb9dcaea44a62678fa91391009c26e"} Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.698321 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-skk7h" event={"ID":"2f759e91-6dab-4432-9431-ce312918c7e7","Type":"ContainerDied","Data":"633e01db38c452fde46acd1d5ba44b00225546bfdd35ff0b1f39e708b02f0213"} Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.698369 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633e01db38c452fde46acd1d5ba44b00225546bfdd35ff0b1f39e708b02f0213" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.698421 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-skk7h" Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.778445 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4691-account-create-update-pvhx9"] Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.789193 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j254l"] Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.795593 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-lb47b"] Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.805685 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-n8mlq"] Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.810810 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e4f9-account-create-update-x5k9s"] Jan 21 15:52:37 crc kubenswrapper[4890]: I0121 15:52:37.818236 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-de17-account-create-update-jrxd2"] Jan 21 15:52:37 crc kubenswrapper[4890]: W0121 15:52:37.828486 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd08ecdf9_e34c_476c_99d6_f2e7db2b5129.slice/crio-1ae66f9f36865765546a7d9f2d8e9937b20e14d413250c426f214f1513cac175 WatchSource:0}: Error finding container 1ae66f9f36865765546a7d9f2d8e9937b20e14d413250c426f214f1513cac175: Status 404 returned error can't find the container with id 1ae66f9f36865765546a7d9f2d8e9937b20e14d413250c426f214f1513cac175 Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.718320 4890 generic.go:334] "Generic (PLEG): container finished" podID="a501566d-03dd-40b1-bda9-8c6173d9292f" containerID="05401006a7a2ad8b1aa8b60e1077170d3fc35d0dc28d5fca2fa129ee967173b5" exitCode=0 Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.718459 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e4f9-account-create-update-x5k9s" event={"ID":"a501566d-03dd-40b1-bda9-8c6173d9292f","Type":"ContainerDied","Data":"05401006a7a2ad8b1aa8b60e1077170d3fc35d0dc28d5fca2fa129ee967173b5"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.719110 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e4f9-account-create-update-x5k9s" event={"ID":"a501566d-03dd-40b1-bda9-8c6173d9292f","Type":"ContainerStarted","Data":"9fa67fa46975367a435e8b4b96640eb87916718852a59480cc6663486b093e51"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.724567 4890 generic.go:334] "Generic (PLEG): container finished" podID="55a47ebe-8900-4913-b7a1-9988e32cc5dc" containerID="0d51c4e84ed9dc0d609610989bd5d3017dd5dceb256f192563ab877302776690" exitCode=0 Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.724698 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lb47b" 
event={"ID":"55a47ebe-8900-4913-b7a1-9988e32cc5dc","Type":"ContainerDied","Data":"0d51c4e84ed9dc0d609610989bd5d3017dd5dceb256f192563ab877302776690"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.724727 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lb47b" event={"ID":"55a47ebe-8900-4913-b7a1-9988e32cc5dc","Type":"ContainerStarted","Data":"1d5f65263ae9c28dc1de23989df4804842ea0684937bce630b5c6d1bcdc49888"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.733228 4890 generic.go:334] "Generic (PLEG): container finished" podID="700a77fe-9836-4979-8c95-7054c3d8d42a" containerID="e1a79afa342facaff6b7dba587375b37a5b33902571feffefe046acbe30019ad" exitCode=0 Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.733300 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4691-account-create-update-pvhx9" event={"ID":"700a77fe-9836-4979-8c95-7054c3d8d42a","Type":"ContainerDied","Data":"e1a79afa342facaff6b7dba587375b37a5b33902571feffefe046acbe30019ad"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.733325 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4691-account-create-update-pvhx9" event={"ID":"700a77fe-9836-4979-8c95-7054c3d8d42a","Type":"ContainerStarted","Data":"8e3ab3dd1c661fcb435b3f03b4efc338b020e313dd8f0f61d29c171f72fcdf9b"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.736305 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-n8mlq" event={"ID":"ef076f7d-7b53-4a05-8208-8dfa2ee2d415","Type":"ContainerStarted","Data":"7552c39bc54171c450363d5190c4ce14f2442d912c1258a2fda6ca924e4d7d85"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.744566 4890 generic.go:334] "Generic (PLEG): container finished" podID="8f9ab0a2-4598-4893-bf8b-c216f4f4b692" containerID="8a600df015f1f8de1958bab6bd14cbbafa4e20cc6adeaa682684122d5cd783ab" exitCode=0 Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.744666 
4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j254l" event={"ID":"8f9ab0a2-4598-4893-bf8b-c216f4f4b692","Type":"ContainerDied","Data":"8a600df015f1f8de1958bab6bd14cbbafa4e20cc6adeaa682684122d5cd783ab"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.744697 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j254l" event={"ID":"8f9ab0a2-4598-4893-bf8b-c216f4f4b692","Type":"ContainerStarted","Data":"ca10b2070602e50e87a8fcff416910880e9d8583d446af5166db6b2b95aaf420"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.755264 4890 generic.go:334] "Generic (PLEG): container finished" podID="d08ecdf9-e34c-476c-99d6-f2e7db2b5129" containerID="de33dfbf0dbdc2111900a2a70b6bed546d48c92feb4b8907baecfaea0ab70977" exitCode=0 Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.755340 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-de17-account-create-update-jrxd2" event={"ID":"d08ecdf9-e34c-476c-99d6-f2e7db2b5129","Type":"ContainerDied","Data":"de33dfbf0dbdc2111900a2a70b6bed546d48c92feb4b8907baecfaea0ab70977"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.755501 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-de17-account-create-update-jrxd2" event={"ID":"d08ecdf9-e34c-476c-99d6-f2e7db2b5129","Type":"ContainerStarted","Data":"1ae66f9f36865765546a7d9f2d8e9937b20e14d413250c426f214f1513cac175"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.761877 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" event={"ID":"04661de4-967f-4cc0-a4a3-ced72441fda3","Type":"ContainerStarted","Data":"8e2adafaf8744fb5b74a302119c09a9e39b7faee19d77ad021bf6516bd2ff761"} Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.869172 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" podStartSLOduration=7.869151175 
podStartE2EDuration="7.869151175s" podCreationTimestamp="2026-01-21 15:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:38.836112996 +0000 UTC m=+1241.197555405" watchObservedRunningTime="2026-01-21 15:52:38.869151175 +0000 UTC m=+1241.230593584" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.870499 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-spvst"] Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.887131 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-skmdf"] Jan 21 15:52:38 crc kubenswrapper[4890]: E0121 15:52:38.887641 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f759e91-6dab-4432-9431-ce312918c7e7" containerName="glance-db-sync" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.887665 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f759e91-6dab-4432-9431-ce312918c7e7" containerName="glance-db-sync" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.887857 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f759e91-6dab-4432-9431-ce312918c7e7" containerName="glance-db-sync" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.888898 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.903193 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-skmdf"] Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.995446 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.995563 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-config\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.995645 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.995812 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp5jk\" (UniqueName: \"kubernetes.io/projected/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-kube-api-access-kp5jk\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.995896 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:38 crc kubenswrapper[4890]: I0121 15:52:38.996222 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.097934 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.098018 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.098059 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-config\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.098094 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.098128 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp5jk\" (UniqueName: \"kubernetes.io/projected/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-kube-api-access-kp5jk\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.098156 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.099058 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.099140 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.099643 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.099742 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-config\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.100200 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.123957 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp5jk\" (UniqueName: \"kubernetes.io/projected/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-kube-api-access-kp5jk\") pod \"dnsmasq-dns-56c9bc6f5c-skmdf\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.226318 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.232238 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.301671 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8cvf\" (UniqueName: \"kubernetes.io/projected/013a38e6-319d-4fd9-bba3-a05b6c10acd9-kube-api-access-g8cvf\") pod \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\" (UID: \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\") " Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.301788 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/013a38e6-319d-4fd9-bba3-a05b6c10acd9-operator-scripts\") pod \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\" (UID: \"013a38e6-319d-4fd9-bba3-a05b6c10acd9\") " Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.303013 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013a38e6-319d-4fd9-bba3-a05b6c10acd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "013a38e6-319d-4fd9-bba3-a05b6c10acd9" (UID: "013a38e6-319d-4fd9-bba3-a05b6c10acd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.307928 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013a38e6-319d-4fd9-bba3-a05b6c10acd9-kube-api-access-g8cvf" (OuterVolumeSpecName: "kube-api-access-g8cvf") pod "013a38e6-319d-4fd9-bba3-a05b6c10acd9" (UID: "013a38e6-319d-4fd9-bba3-a05b6c10acd9"). InnerVolumeSpecName "kube-api-access-g8cvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.405475 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8cvf\" (UniqueName: \"kubernetes.io/projected/013a38e6-319d-4fd9-bba3-a05b6c10acd9-kube-api-access-g8cvf\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.405542 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/013a38e6-319d-4fd9-bba3-a05b6c10acd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.741375 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-skmdf"] Jan 21 15:52:39 crc kubenswrapper[4890]: W0121 15:52:39.752975 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50b7aaa8_30dd_4230_8d74_a20cbe7f10d9.slice/crio-2f5eecdc5d93383b5c51f9c3b20de4955cee35d44ff0fd287fea91c59312762e WatchSource:0}: Error finding container 2f5eecdc5d93383b5c51f9c3b20de4955cee35d44ff0fd287fea91c59312762e: Status 404 returned error can't find the container with id 2f5eecdc5d93383b5c51f9c3b20de4955cee35d44ff0fd287fea91c59312762e Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.774891 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-lwpq6" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.774929 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lwpq6" event={"ID":"013a38e6-319d-4fd9-bba3-a05b6c10acd9","Type":"ContainerDied","Data":"f09d6b77117291a2d126047657be23ec584755d86213f73822913f3e102779cd"} Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.774968 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f09d6b77117291a2d126047657be23ec584755d86213f73822913f3e102779cd" Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.780164 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" event={"ID":"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9","Type":"ContainerStarted","Data":"2f5eecdc5d93383b5c51f9c3b20de4955cee35d44ff0fd287fea91c59312762e"} Jan 21 15:52:39 crc kubenswrapper[4890]: I0121 15:52:39.780545 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.151748 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.238219 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62dpx\" (UniqueName: \"kubernetes.io/projected/55a47ebe-8900-4913-b7a1-9988e32cc5dc-kube-api-access-62dpx\") pod \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\" (UID: \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.238518 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a47ebe-8900-4913-b7a1-9988e32cc5dc-operator-scripts\") pod \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\" (UID: \"55a47ebe-8900-4913-b7a1-9988e32cc5dc\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.239179 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55a47ebe-8900-4913-b7a1-9988e32cc5dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55a47ebe-8900-4913-b7a1-9988e32cc5dc" (UID: "55a47ebe-8900-4913-b7a1-9988e32cc5dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.252585 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a47ebe-8900-4913-b7a1-9988e32cc5dc-kube-api-access-62dpx" (OuterVolumeSpecName: "kube-api-access-62dpx") pod "55a47ebe-8900-4913-b7a1-9988e32cc5dc" (UID: "55a47ebe-8900-4913-b7a1-9988e32cc5dc"). InnerVolumeSpecName "kube-api-access-62dpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.341702 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a47ebe-8900-4913-b7a1-9988e32cc5dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.341976 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62dpx\" (UniqueName: \"kubernetes.io/projected/55a47ebe-8900-4913-b7a1-9988e32cc5dc-kube-api-access-62dpx\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.490787 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.496647 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.508161 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j254l" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.519605 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.648076 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfvsc\" (UniqueName: \"kubernetes.io/projected/a501566d-03dd-40b1-bda9-8c6173d9292f-kube-api-access-bfvsc\") pod \"a501566d-03dd-40b1-bda9-8c6173d9292f\" (UID: \"a501566d-03dd-40b1-bda9-8c6173d9292f\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.649428 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pskc\" (UniqueName: \"kubernetes.io/projected/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-kube-api-access-5pskc\") pod \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\" (UID: \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.649463 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thmtv\" (UniqueName: \"kubernetes.io/projected/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-kube-api-access-thmtv\") pod \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\" (UID: \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.649495 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf5z5\" (UniqueName: \"kubernetes.io/projected/700a77fe-9836-4979-8c95-7054c3d8d42a-kube-api-access-wf5z5\") pod \"700a77fe-9836-4979-8c95-7054c3d8d42a\" (UID: \"700a77fe-9836-4979-8c95-7054c3d8d42a\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.649518 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-operator-scripts\") pod \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\" (UID: \"d08ecdf9-e34c-476c-99d6-f2e7db2b5129\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.649813 4890 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700a77fe-9836-4979-8c95-7054c3d8d42a-operator-scripts\") pod \"700a77fe-9836-4979-8c95-7054c3d8d42a\" (UID: \"700a77fe-9836-4979-8c95-7054c3d8d42a\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.649860 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-operator-scripts\") pod \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\" (UID: \"8f9ab0a2-4598-4893-bf8b-c216f4f4b692\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.650019 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a501566d-03dd-40b1-bda9-8c6173d9292f-operator-scripts\") pod \"a501566d-03dd-40b1-bda9-8c6173d9292f\" (UID: \"a501566d-03dd-40b1-bda9-8c6173d9292f\") " Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.650136 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d08ecdf9-e34c-476c-99d6-f2e7db2b5129" (UID: "d08ecdf9-e34c-476c-99d6-f2e7db2b5129"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.650449 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f9ab0a2-4598-4893-bf8b-c216f4f4b692" (UID: "8f9ab0a2-4598-4893-bf8b-c216f4f4b692"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.650448 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/700a77fe-9836-4979-8c95-7054c3d8d42a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "700a77fe-9836-4979-8c95-7054c3d8d42a" (UID: "700a77fe-9836-4979-8c95-7054c3d8d42a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.650539 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.650555 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700a77fe-9836-4979-8c95-7054c3d8d42a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.650564 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.650719 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a501566d-03dd-40b1-bda9-8c6173d9292f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a501566d-03dd-40b1-bda9-8c6173d9292f" (UID: "a501566d-03dd-40b1-bda9-8c6173d9292f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.653484 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700a77fe-9836-4979-8c95-7054c3d8d42a-kube-api-access-wf5z5" (OuterVolumeSpecName: "kube-api-access-wf5z5") pod "700a77fe-9836-4979-8c95-7054c3d8d42a" (UID: "700a77fe-9836-4979-8c95-7054c3d8d42a"). InnerVolumeSpecName "kube-api-access-wf5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.653540 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a501566d-03dd-40b1-bda9-8c6173d9292f-kube-api-access-bfvsc" (OuterVolumeSpecName: "kube-api-access-bfvsc") pod "a501566d-03dd-40b1-bda9-8c6173d9292f" (UID: "a501566d-03dd-40b1-bda9-8c6173d9292f"). InnerVolumeSpecName "kube-api-access-bfvsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.654229 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-kube-api-access-5pskc" (OuterVolumeSpecName: "kube-api-access-5pskc") pod "8f9ab0a2-4598-4893-bf8b-c216f4f4b692" (UID: "8f9ab0a2-4598-4893-bf8b-c216f4f4b692"). InnerVolumeSpecName "kube-api-access-5pskc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.654602 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-kube-api-access-thmtv" (OuterVolumeSpecName: "kube-api-access-thmtv") pod "d08ecdf9-e34c-476c-99d6-f2e7db2b5129" (UID: "d08ecdf9-e34c-476c-99d6-f2e7db2b5129"). InnerVolumeSpecName "kube-api-access-thmtv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.752636 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a501566d-03dd-40b1-bda9-8c6173d9292f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.752686 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfvsc\" (UniqueName: \"kubernetes.io/projected/a501566d-03dd-40b1-bda9-8c6173d9292f-kube-api-access-bfvsc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.752703 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pskc\" (UniqueName: \"kubernetes.io/projected/8f9ab0a2-4598-4893-bf8b-c216f4f4b692-kube-api-access-5pskc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.752716 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thmtv\" (UniqueName: \"kubernetes.io/projected/d08ecdf9-e34c-476c-99d6-f2e7db2b5129-kube-api-access-thmtv\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.752729 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf5z5\" (UniqueName: \"kubernetes.io/projected/700a77fe-9836-4979-8c95-7054c3d8d42a-kube-api-access-wf5z5\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.790440 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j254l" event={"ID":"8f9ab0a2-4598-4893-bf8b-c216f4f4b692","Type":"ContainerDied","Data":"ca10b2070602e50e87a8fcff416910880e9d8583d446af5166db6b2b95aaf420"} Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.790479 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca10b2070602e50e87a8fcff416910880e9d8583d446af5166db6b2b95aaf420" Jan 21 15:52:40 
crc kubenswrapper[4890]: I0121 15:52:40.790512 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j254l" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.792123 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-de17-account-create-update-jrxd2" event={"ID":"d08ecdf9-e34c-476c-99d6-f2e7db2b5129","Type":"ContainerDied","Data":"1ae66f9f36865765546a7d9f2d8e9937b20e14d413250c426f214f1513cac175"} Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.792142 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ae66f9f36865765546a7d9f2d8e9937b20e14d413250c426f214f1513cac175" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.792187 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-de17-account-create-update-jrxd2" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.794284 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e4f9-account-create-update-x5k9s" event={"ID":"a501566d-03dd-40b1-bda9-8c6173d9292f","Type":"ContainerDied","Data":"9fa67fa46975367a435e8b4b96640eb87916718852a59480cc6663486b093e51"} Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.794327 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fa67fa46975367a435e8b4b96640eb87916718852a59480cc6663486b093e51" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.794394 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e4f9-account-create-update-x5k9s" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.800107 4890 generic.go:334] "Generic (PLEG): container finished" podID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" containerID="96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b" exitCode=0 Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.800175 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" event={"ID":"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9","Type":"ContainerDied","Data":"96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b"} Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.802784 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lb47b" event={"ID":"55a47ebe-8900-4913-b7a1-9988e32cc5dc","Type":"ContainerDied","Data":"1d5f65263ae9c28dc1de23989df4804842ea0684937bce630b5c6d1bcdc49888"} Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.802824 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d5f65263ae9c28dc1de23989df4804842ea0684937bce630b5c6d1bcdc49888" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.802875 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-lb47b" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.813994 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4691-account-create-update-pvhx9" event={"ID":"700a77fe-9836-4979-8c95-7054c3d8d42a","Type":"ContainerDied","Data":"8e3ab3dd1c661fcb435b3f03b4efc338b020e313dd8f0f61d29c171f72fcdf9b"} Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.814178 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e3ab3dd1c661fcb435b3f03b4efc338b020e313dd8f0f61d29c171f72fcdf9b" Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.814069 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" podUID="04661de4-967f-4cc0-a4a3-ced72441fda3" containerName="dnsmasq-dns" containerID="cri-o://8e2adafaf8744fb5b74a302119c09a9e39b7faee19d77ad021bf6516bd2ff761" gracePeriod=10 Jan 21 15:52:40 crc kubenswrapper[4890]: I0121 15:52:40.814086 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4691-account-create-update-pvhx9" Jan 21 15:52:41 crc kubenswrapper[4890]: I0121 15:52:41.825828 4890 generic.go:334] "Generic (PLEG): container finished" podID="04661de4-967f-4cc0-a4a3-ced72441fda3" containerID="8e2adafaf8744fb5b74a302119c09a9e39b7faee19d77ad021bf6516bd2ff761" exitCode=0 Jan 21 15:52:41 crc kubenswrapper[4890]: I0121 15:52:41.825881 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" event={"ID":"04661de4-967f-4cc0-a4a3-ced72441fda3","Type":"ContainerDied","Data":"8e2adafaf8744fb5b74a302119c09a9e39b7faee19d77ad021bf6516bd2ff761"} Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.788889 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.846814 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" event={"ID":"04661de4-967f-4cc0-a4a3-ced72441fda3","Type":"ContainerDied","Data":"6fc13fb01e51715e93d3a03bb490a357fbf3677dff523b434fa3e95303e77e35"} Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.846848 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-spvst" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.846877 4890 scope.go:117] "RemoveContainer" containerID="8e2adafaf8744fb5b74a302119c09a9e39b7faee19d77ad021bf6516bd2ff761" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.850463 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" event={"ID":"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9","Type":"ContainerStarted","Data":"2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4"} Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.850671 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.862199 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-n8mlq" event={"ID":"ef076f7d-7b53-4a05-8208-8dfa2ee2d415","Type":"ContainerStarted","Data":"3228c1d75258cedf57a5135ce6d6d2bc6c7abf0865a062cab2068cc01ef96f79"} Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.872385 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" podStartSLOduration=5.872327421 podStartE2EDuration="5.872327421s" podCreationTimestamp="2026-01-21 15:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
15:52:43.870712011 +0000 UTC m=+1246.232154420" watchObservedRunningTime="2026-01-21 15:52:43.872327421 +0000 UTC m=+1246.233769840" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.881343 4890 scope.go:117] "RemoveContainer" containerID="e84e0b9cadb72aa045b4212f18cd600d7adb9dcaea44a62678fa91391009c26e" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.892667 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-n8mlq" podStartSLOduration=6.193352927 podStartE2EDuration="11.892641885s" podCreationTimestamp="2026-01-21 15:52:32 +0000 UTC" firstStartedPulling="2026-01-21 15:52:37.825641238 +0000 UTC m=+1240.187083637" lastFinishedPulling="2026-01-21 15:52:43.524930186 +0000 UTC m=+1245.886372595" observedRunningTime="2026-01-21 15:52:43.887564279 +0000 UTC m=+1246.249006678" watchObservedRunningTime="2026-01-21 15:52:43.892641885 +0000 UTC m=+1246.254084304" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.910881 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-sb\") pod \"04661de4-967f-4cc0-a4a3-ced72441fda3\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.910949 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-swift-storage-0\") pod \"04661de4-967f-4cc0-a4a3-ced72441fda3\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.910979 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-config\") pod \"04661de4-967f-4cc0-a4a3-ced72441fda3\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " Jan 21 
15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.911057 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-svc\") pod \"04661de4-967f-4cc0-a4a3-ced72441fda3\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.911086 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-nb\") pod \"04661de4-967f-4cc0-a4a3-ced72441fda3\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.911124 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5lp8\" (UniqueName: \"kubernetes.io/projected/04661de4-967f-4cc0-a4a3-ced72441fda3-kube-api-access-p5lp8\") pod \"04661de4-967f-4cc0-a4a3-ced72441fda3\" (UID: \"04661de4-967f-4cc0-a4a3-ced72441fda3\") " Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.918694 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04661de4-967f-4cc0-a4a3-ced72441fda3-kube-api-access-p5lp8" (OuterVolumeSpecName: "kube-api-access-p5lp8") pod "04661de4-967f-4cc0-a4a3-ced72441fda3" (UID: "04661de4-967f-4cc0-a4a3-ced72441fda3"). InnerVolumeSpecName "kube-api-access-p5lp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.954253 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "04661de4-967f-4cc0-a4a3-ced72441fda3" (UID: "04661de4-967f-4cc0-a4a3-ced72441fda3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.954636 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "04661de4-967f-4cc0-a4a3-ced72441fda3" (UID: "04661de4-967f-4cc0-a4a3-ced72441fda3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.956297 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "04661de4-967f-4cc0-a4a3-ced72441fda3" (UID: "04661de4-967f-4cc0-a4a3-ced72441fda3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.965627 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "04661de4-967f-4cc0-a4a3-ced72441fda3" (UID: "04661de4-967f-4cc0-a4a3-ced72441fda3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:43 crc kubenswrapper[4890]: I0121 15:52:43.967481 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-config" (OuterVolumeSpecName: "config") pod "04661de4-967f-4cc0-a4a3-ced72441fda3" (UID: "04661de4-967f-4cc0-a4a3-ced72441fda3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:44 crc kubenswrapper[4890]: I0121 15:52:44.012883 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:44 crc kubenswrapper[4890]: I0121 15:52:44.012927 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:44 crc kubenswrapper[4890]: I0121 15:52:44.012941 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5lp8\" (UniqueName: \"kubernetes.io/projected/04661de4-967f-4cc0-a4a3-ced72441fda3-kube-api-access-p5lp8\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:44 crc kubenswrapper[4890]: I0121 15:52:44.012953 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:44 crc kubenswrapper[4890]: I0121 15:52:44.012964 4890 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:44 crc kubenswrapper[4890]: I0121 15:52:44.012975 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04661de4-967f-4cc0-a4a3-ced72441fda3-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:44 crc kubenswrapper[4890]: I0121 15:52:44.182005 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-spvst"] Jan 21 15:52:44 crc kubenswrapper[4890]: I0121 15:52:44.191825 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-spvst"] Jan 21 
15:52:45 crc kubenswrapper[4890]: I0121 15:52:45.932869 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04661de4-967f-4cc0-a4a3-ced72441fda3" path="/var/lib/kubelet/pods/04661de4-967f-4cc0-a4a3-ced72441fda3/volumes" Jan 21 15:52:46 crc kubenswrapper[4890]: I0121 15:52:46.892283 4890 generic.go:334] "Generic (PLEG): container finished" podID="ef076f7d-7b53-4a05-8208-8dfa2ee2d415" containerID="3228c1d75258cedf57a5135ce6d6d2bc6c7abf0865a062cab2068cc01ef96f79" exitCode=0 Jan 21 15:52:46 crc kubenswrapper[4890]: I0121 15:52:46.892341 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-n8mlq" event={"ID":"ef076f7d-7b53-4a05-8208-8dfa2ee2d415","Type":"ContainerDied","Data":"3228c1d75258cedf57a5135ce6d6d2bc6c7abf0865a062cab2068cc01ef96f79"} Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.220808 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.298155 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-combined-ca-bundle\") pod \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.298373 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-config-data\") pod \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.298441 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82m2p\" (UniqueName: \"kubernetes.io/projected/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-kube-api-access-82m2p\") pod 
\"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\" (UID: \"ef076f7d-7b53-4a05-8208-8dfa2ee2d415\") " Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.303567 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-kube-api-access-82m2p" (OuterVolumeSpecName: "kube-api-access-82m2p") pod "ef076f7d-7b53-4a05-8208-8dfa2ee2d415" (UID: "ef076f7d-7b53-4a05-8208-8dfa2ee2d415"). InnerVolumeSpecName "kube-api-access-82m2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.321638 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef076f7d-7b53-4a05-8208-8dfa2ee2d415" (UID: "ef076f7d-7b53-4a05-8208-8dfa2ee2d415"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.342683 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-config-data" (OuterVolumeSpecName: "config-data") pod "ef076f7d-7b53-4a05-8208-8dfa2ee2d415" (UID: "ef076f7d-7b53-4a05-8208-8dfa2ee2d415"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.401364 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.401413 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.401426 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82m2p\" (UniqueName: \"kubernetes.io/projected/ef076f7d-7b53-4a05-8208-8dfa2ee2d415-kube-api-access-82m2p\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.762555 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.762887 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.762926 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.763682 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d0a634f6e929f7ffc1800d062d4e30092fbcb2b4f2a695698fc22410e40c8906"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.763744 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://d0a634f6e929f7ffc1800d062d4e30092fbcb2b4f2a695698fc22410e40c8906" gracePeriod=600 Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.909240 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="d0a634f6e929f7ffc1800d062d4e30092fbcb2b4f2a695698fc22410e40c8906" exitCode=0 Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.909303 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"d0a634f6e929f7ffc1800d062d4e30092fbcb2b4f2a695698fc22410e40c8906"} Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.909591 4890 scope.go:117] "RemoveContainer" containerID="b81d20500077e709078904e361919a2211cb0af68d145b245b901c65377ab4de" Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.912177 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-n8mlq" event={"ID":"ef076f7d-7b53-4a05-8208-8dfa2ee2d415","Type":"ContainerDied","Data":"7552c39bc54171c450363d5190c4ce14f2442d912c1258a2fda6ca924e4d7d85"} Jan 21 15:52:48 crc kubenswrapper[4890]: I0121 15:52:48.912275 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7552c39bc54171c450363d5190c4ce14f2442d912c1258a2fda6ca924e4d7d85" Jan 21 15:52:48 crc kubenswrapper[4890]: 
I0121 15:52:48.912241 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-n8mlq" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.172550 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-skmdf"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.172839 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" podUID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" containerName="dnsmasq-dns" containerID="cri-o://2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4" gracePeriod=10 Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.177276 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214218 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mxv54"] Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.214663 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08ecdf9-e34c-476c-99d6-f2e7db2b5129" containerName="mariadb-account-create-update" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214683 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08ecdf9-e34c-476c-99d6-f2e7db2b5129" containerName="mariadb-account-create-update" Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.214703 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04661de4-967f-4cc0-a4a3-ced72441fda3" containerName="dnsmasq-dns" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214712 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="04661de4-967f-4cc0-a4a3-ced72441fda3" containerName="dnsmasq-dns" Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.214730 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04661de4-967f-4cc0-a4a3-ced72441fda3" 
containerName="init" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214740 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="04661de4-967f-4cc0-a4a3-ced72441fda3" containerName="init" Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.214752 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a501566d-03dd-40b1-bda9-8c6173d9292f" containerName="mariadb-account-create-update" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214760 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="a501566d-03dd-40b1-bda9-8c6173d9292f" containerName="mariadb-account-create-update" Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.214771 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700a77fe-9836-4979-8c95-7054c3d8d42a" containerName="mariadb-account-create-update" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214779 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="700a77fe-9836-4979-8c95-7054c3d8d42a" containerName="mariadb-account-create-update" Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.214788 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a47ebe-8900-4913-b7a1-9988e32cc5dc" containerName="mariadb-database-create" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214795 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a47ebe-8900-4913-b7a1-9988e32cc5dc" containerName="mariadb-database-create" Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.214817 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef076f7d-7b53-4a05-8208-8dfa2ee2d415" containerName="keystone-db-sync" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214830 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef076f7d-7b53-4a05-8208-8dfa2ee2d415" containerName="keystone-db-sync" Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.214845 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8f9ab0a2-4598-4893-bf8b-c216f4f4b692" containerName="mariadb-database-create" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214852 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9ab0a2-4598-4893-bf8b-c216f4f4b692" containerName="mariadb-database-create" Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.214872 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013a38e6-319d-4fd9-bba3-a05b6c10acd9" containerName="mariadb-database-create" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.214879 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="013a38e6-319d-4fd9-bba3-a05b6c10acd9" containerName="mariadb-database-create" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.215063 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="04661de4-967f-4cc0-a4a3-ced72441fda3" containerName="dnsmasq-dns" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.215075 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a47ebe-8900-4913-b7a1-9988e32cc5dc" containerName="mariadb-database-create" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.215087 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="013a38e6-319d-4fd9-bba3-a05b6c10acd9" containerName="mariadb-database-create" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.215096 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef076f7d-7b53-4a05-8208-8dfa2ee2d415" containerName="keystone-db-sync" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.215106 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9ab0a2-4598-4893-bf8b-c216f4f4b692" containerName="mariadb-database-create" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.215117 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="700a77fe-9836-4979-8c95-7054c3d8d42a" containerName="mariadb-account-create-update" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 
15:52:49.215127 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08ecdf9-e34c-476c-99d6-f2e7db2b5129" containerName="mariadb-account-create-update" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.215136 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="a501566d-03dd-40b1-bda9-8c6173d9292f" containerName="mariadb-account-create-update" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.215784 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.221866 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.222118 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.222262 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.222499 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.227589 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" podUID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.234464 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l9jxh" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.236906 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-c2nzb"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.238822 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.267494 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mxv54"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.286200 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-c2nzb"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.324898 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.324936 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-config-data\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.324973 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.325014 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wprc6\" (UniqueName: \"kubernetes.io/projected/573c007e-6a9b-461e-bf72-e01c0ab6e784-kube-api-access-wprc6\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " 
pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.325058 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.325076 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gh2\" (UniqueName: \"kubernetes.io/projected/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-kube-api-access-j6gh2\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.325094 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.325117 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-credential-keys\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.325137 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-combined-ca-bundle\") pod \"keystone-bootstrap-mxv54\" (UID: 
\"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.325153 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-config\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.325169 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-scripts\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.325193 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-fernet-keys\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.418179 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.429024 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.430965 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-combined-ca-bundle\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.433028 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-config\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.433115 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-scripts\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.433231 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-fernet-keys\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.433336 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 
15:52:49.433435 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-config-data\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.433556 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.433702 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wprc6\" (UniqueName: \"kubernetes.io/projected/573c007e-6a9b-461e-bf72-e01c0ab6e784-kube-api-access-wprc6\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.433885 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.434008 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6gh2\" (UniqueName: \"kubernetes.io/projected/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-kube-api-access-j6gh2\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.434126 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.434260 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-credential-keys\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.442011 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.441771 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-config\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.450369 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.454659 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.454739 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.459719 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.462652 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-fernet-keys\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.463057 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-config-data\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.466971 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.471015 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.487899 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-scripts\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.503263 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-credential-keys\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.504906 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-combined-ca-bundle\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.515165 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wprc6\" (UniqueName: \"kubernetes.io/projected/573c007e-6a9b-461e-bf72-e01c0ab6e784-kube-api-access-wprc6\") pod \"dnsmasq-dns-54b4bb76d5-c2nzb\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.535648 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-config-data\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.535937 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2frzc\" (UniqueName: \"kubernetes.io/projected/4b9b0697-d7f9-404f-9311-214c97146a27-kube-api-access-2frzc\") 
pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.536070 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.536160 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.536320 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-scripts\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.536492 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-log-httpd\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.536605 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-run-httpd\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 
15:52:49.537465 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6gh2\" (UniqueName: \"kubernetes.io/projected/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-kube-api-access-j6gh2\") pod \"keystone-bootstrap-mxv54\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.542392 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-ltrrf"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.544016 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.563883 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8zb5j" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.564120 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.564292 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.584018 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.605156 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ltrrf"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.638577 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-config-data\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.638787 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f467\" (UniqueName: \"kubernetes.io/projected/55d621d1-f812-4467-aeee-2ed0da3d68ac-kube-api-access-4f467\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.638865 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55d621d1-f812-4467-aeee-2ed0da3d68ac-etc-machine-id\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.638972 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-scripts\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.639061 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-scripts\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.639162 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-db-sync-config-data\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.639225 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-combined-ca-bundle\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.639301 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-log-httpd\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.639422 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-run-httpd\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.639490 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-config-data\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " 
pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.639561 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2frzc\" (UniqueName: \"kubernetes.io/projected/4b9b0697-d7f9-404f-9311-214c97146a27-kube-api-access-2frzc\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.639651 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.639710 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.651497 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-run-httpd\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.656057 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-log-httpd\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.656531 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.660798 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-config-data\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.661489 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.668980 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.675918 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-f8v9z"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.676910 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-scripts\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.687034 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2frzc\" (UniqueName: \"kubernetes.io/projected/4b9b0697-d7f9-404f-9311-214c97146a27-kube-api-access-2frzc\") pod \"ceilometer-0\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " pod="openstack/ceilometer-0" Jan 21 15:52:49 crc 
kubenswrapper[4890]: I0121 15:52:49.689332 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.701928 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.702159 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.702312 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sx6lz" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.713519 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-f8v9z"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.739309 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-thtbf"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.741967 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.741994 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-config-data\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.742053 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-combined-ca-bundle\") pod \"neutron-db-sync-f8v9z\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.742090 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f467\" (UniqueName: \"kubernetes.io/projected/55d621d1-f812-4467-aeee-2ed0da3d68ac-kube-api-access-4f467\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.742115 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55d621d1-f812-4467-aeee-2ed0da3d68ac-etc-machine-id\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.742154 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-scripts\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 
15:52:49.742178 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-config\") pod \"neutron-db-sync-f8v9z\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.742225 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9fhh\" (UniqueName: \"kubernetes.io/projected/ced3b279-b256-483b-af6f-3b13721f1ef8-kube-api-access-k9fhh\") pod \"neutron-db-sync-f8v9z\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.742248 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-db-sync-config-data\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.742269 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-combined-ca-bundle\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.754318 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-db-sync-config-data\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.761980 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-scripts\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.764840 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55d621d1-f812-4467-aeee-2ed0da3d68ac-etc-machine-id\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.764896 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.764903 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.765001 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-lbvpt" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.765854 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-combined-ca-bundle\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.766547 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-thtbf"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.768461 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-config-data\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc 
kubenswrapper[4890]: I0121 15:52:49.818140 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f467\" (UniqueName: \"kubernetes.io/projected/55d621d1-f812-4467-aeee-2ed0da3d68ac-kube-api-access-4f467\") pod \"cinder-db-sync-ltrrf\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.834123 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-c2nzb"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.850916 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-scripts\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.850960 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-config\") pod \"neutron-db-sync-f8v9z\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.851020 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9fhh\" (UniqueName: \"kubernetes.io/projected/ced3b279-b256-483b-af6f-3b13721f1ef8-kube-api-access-k9fhh\") pod \"neutron-db-sync-f8v9z\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.851110 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-logs\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " 
pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.851141 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-combined-ca-bundle\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.851210 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-config-data\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.851267 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwscv\" (UniqueName: \"kubernetes.io/projected/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-kube-api-access-rwscv\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.851325 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-combined-ca-bundle\") pod \"neutron-db-sync-f8v9z\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.852665 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wm8lg"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.859840 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.861591 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.862972 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mc94m" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.864440 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wm8lg"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.867501 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-config\") pod \"neutron-db-sync-f8v9z\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.874238 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-combined-ca-bundle\") pod \"neutron-db-sync-f8v9z\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.880379 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9fhh\" (UniqueName: \"kubernetes.io/projected/ced3b279-b256-483b-af6f-3b13721f1ef8-kube-api-access-k9fhh\") pod \"neutron-db-sync-f8v9z\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.885196 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.894099 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.902581 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-xmn44"] Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.903309 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" containerName="dnsmasq-dns" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.903327 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" containerName="dnsmasq-dns" Jan 21 15:52:49 crc kubenswrapper[4890]: E0121 15:52:49.903382 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" containerName="init" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.903394 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" containerName="init" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.903597 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" containerName="dnsmasq-dns" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.907589 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.952382 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-xmn44"] Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.953112 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-sb\") pod \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.953158 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-swift-storage-0\") pod \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.953327 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-svc\") pod \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.953384 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-config\") pod \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.953446 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-nb\") pod \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " Jan 21 15:52:49 
crc kubenswrapper[4890]: I0121 15:52:49.953497 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp5jk\" (UniqueName: \"kubernetes.io/projected/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-kube-api-access-kp5jk\") pod \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\" (UID: \"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9\") " Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.953771 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-scripts\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.953845 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8bs6\" (UniqueName: \"kubernetes.io/projected/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-kube-api-access-v8bs6\") pod \"barbican-db-sync-wm8lg\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.953910 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-config\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.953961 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954005 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-logs\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954055 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-combined-ca-bundle\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954090 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954129 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-db-sync-config-data\") pod \"barbican-db-sync-wm8lg\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954171 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-config-data\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954199 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954239 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954281 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwscv\" (UniqueName: \"kubernetes.io/projected/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-kube-api-access-rwscv\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954393 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-combined-ca-bundle\") pod \"barbican-db-sync-wm8lg\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.954435 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66nb\" (UniqueName: \"kubernetes.io/projected/051e76fa-25e8-401b-b5e4-67feddadd6c6-kube-api-access-t66nb\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.957843 4890 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-logs\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.960920 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-kube-api-access-kp5jk" (OuterVolumeSpecName: "kube-api-access-kp5jk") pod "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" (UID: "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9"). InnerVolumeSpecName "kube-api-access-kp5jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.964575 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"6c1674a2bd424bd7189f15c6273406528477da9f8b31d68e03fb7356078df89f"} Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.971966 4890 generic.go:334] "Generic (PLEG): container finished" podID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" containerID="2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4" exitCode=0 Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.972015 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" event={"ID":"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9","Type":"ContainerDied","Data":"2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4"} Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.972046 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" event={"ID":"50b7aaa8-30dd-4230-8d74-a20cbe7f10d9","Type":"ContainerDied","Data":"2f5eecdc5d93383b5c51f9c3b20de4955cee35d44ff0fd287fea91c59312762e"} Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.972072 4890 scope.go:117] 
"RemoveContainer" containerID="2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.972496 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-skmdf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.983527 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-config-data\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.987542 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-combined-ca-bundle\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.997140 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-scripts\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:49 crc kubenswrapper[4890]: I0121 15:52:49.998324 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwscv\" (UniqueName: \"kubernetes.io/projected/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-kube-api-access-rwscv\") pod \"placement-db-sync-thtbf\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.002939 4890 scope.go:117] "RemoveContainer" containerID="96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 
15:52:50.065662 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.065760 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.065798 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-db-sync-config-data\") pod \"barbican-db-sync-wm8lg\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.065831 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.065880 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.065970 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-combined-ca-bundle\") pod \"barbican-db-sync-wm8lg\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.065996 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t66nb\" (UniqueName: \"kubernetes.io/projected/051e76fa-25e8-401b-b5e4-67feddadd6c6-kube-api-access-t66nb\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.066095 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8bs6\" (UniqueName: \"kubernetes.io/projected/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-kube-api-access-v8bs6\") pod \"barbican-db-sync-wm8lg\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.066140 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-config\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.066392 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp5jk\" (UniqueName: \"kubernetes.io/projected/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-kube-api-access-kp5jk\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.067238 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-config\") pod 
\"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.070764 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.071456 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.071612 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.067344 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.077707 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-db-sync-config-data\") pod \"barbican-db-sync-wm8lg\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " 
pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.089229 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-combined-ca-bundle\") pod \"barbican-db-sync-wm8lg\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.099534 4890 scope.go:117] "RemoveContainer" containerID="2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4" Jan 21 15:52:50 crc kubenswrapper[4890]: E0121 15:52:50.101100 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4\": container with ID starting with 2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4 not found: ID does not exist" containerID="2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.101139 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4"} err="failed to get container status \"2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4\": rpc error: code = NotFound desc = could not find container \"2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4\": container with ID starting with 2f4b9eb2ae8a9a932dfb663b156f3772894e0a5bbceccbce6aeb26673aa4b1f4 not found: ID does not exist" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.101161 4890 scope.go:117] "RemoveContainer" containerID="96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.101807 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t66nb\" 
(UniqueName: \"kubernetes.io/projected/051e76fa-25e8-401b-b5e4-67feddadd6c6-kube-api-access-t66nb\") pod \"dnsmasq-dns-5dc4fcdbc-xmn44\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.119969 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8bs6\" (UniqueName: \"kubernetes.io/projected/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-kube-api-access-v8bs6\") pod \"barbican-db-sync-wm8lg\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:50 crc kubenswrapper[4890]: E0121 15:52:50.120112 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b\": container with ID starting with 96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b not found: ID does not exist" containerID="96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.120143 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b"} err="failed to get container status \"96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b\": rpc error: code = NotFound desc = could not find container \"96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b\": container with ID starting with 96988a11f40d4156a7fbba6f816414da1e05df685ed48d4c0e0e12168711209b not found: ID does not exist" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.120482 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.121673 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-config" (OuterVolumeSpecName: "config") pod "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" (UID: "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.131533 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" (UID: "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.137905 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.139553 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" (UID: "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.145596 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" (UID: "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.158735 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" (UID: "50b7aaa8-30dd-4230-8d74-a20cbe7f10d9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.159146 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-thtbf" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.169371 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.169402 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.169417 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.169450 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.169462 4890 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" 
Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.210090 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.232531 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.374468 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.394578 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.403137 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.407543 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.407761 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.408076 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.408321 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kklp5" Jan 21 15:52:50 crc kubenswrapper[4890]: W0121 15:52:50.435590 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65c69ba1_9046_4c2b_b4ec_5d0307f991ea.slice/crio-93110c670a2400d139cbe3b58c2bb0b5d0ff6acab8c9fec94b4f316cc0ba0d0f WatchSource:0}: Error finding container 93110c670a2400d139cbe3b58c2bb0b5d0ff6acab8c9fec94b4f316cc0ba0d0f: Status 404 returned 
error can't find the container with id 93110c670a2400d139cbe3b58c2bb0b5d0ff6acab8c9fec94b4f316cc0ba0d0f Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.438230 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mxv54"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.474115 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-c2nzb"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.490264 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.490383 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-scripts\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.490879 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.490945 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76d49\" (UniqueName: \"kubernetes.io/projected/c248d71c-c02b-4bce-8c03-6cd7c237fad3-kube-api-access-76d49\") pod \"glance-default-external-api-0\" (UID: 
\"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.490979 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.491086 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-logs\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.491170 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-config-data\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.491193 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.531053 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.559806 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.559963 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.564855 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.564993 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.598255 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-logs\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599017 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-config-data\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599055 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599100 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599208 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599297 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599378 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxsf\" (UniqueName: \"kubernetes.io/projected/26f7f3b2-ffa9-4306-b25d-16e62961439a-kube-api-access-tzxsf\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599420 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-scripts\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599456 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599478 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-logs\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599507 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599564 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76d49\" (UniqueName: \"kubernetes.io/projected/c248d71c-c02b-4bce-8c03-6cd7c237fad3-kube-api-access-76d49\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599599 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599635 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599668 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.599759 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.604668 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.604953 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-scripts\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.607149 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.607432 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-logs\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.610974 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.619593 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.619682 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-config-data\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.630108 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76d49\" (UniqueName: \"kubernetes.io/projected/c248d71c-c02b-4bce-8c03-6cd7c237fad3-kube-api-access-76d49\") pod \"glance-default-external-api-0\" (UID: 
\"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.654382 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.667345 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-skmdf"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.670000 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-skmdf"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.701989 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.702086 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.702129 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzxsf\" (UniqueName: \"kubernetes.io/projected/26f7f3b2-ffa9-4306-b25d-16e62961439a-kube-api-access-tzxsf\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.702161 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.702175 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-logs\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.702207 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.702226 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.702263 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.704451 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-logs\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.705586 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.705714 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.712102 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.712309 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.713027 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.717829 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.733004 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzxsf\" (UniqueName: \"kubernetes.io/projected/26f7f3b2-ffa9-4306-b25d-16e62961439a-kube-api-access-tzxsf\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.749695 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.773266 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ltrrf"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.780461 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.930244 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.954662 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.998915 4890 generic.go:334] "Generic (PLEG): container finished" podID="573c007e-6a9b-461e-bf72-e01c0ab6e784" containerID="fad0d091c2334f97a8e7f61538dc5961276aaba0d2f7ec616808c5e768a9b77f" exitCode=0 Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.998990 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" event={"ID":"573c007e-6a9b-461e-bf72-e01c0ab6e784","Type":"ContainerDied","Data":"fad0d091c2334f97a8e7f61538dc5961276aaba0d2f7ec616808c5e768a9b77f"} Jan 21 15:52:50 crc kubenswrapper[4890]: I0121 15:52:50.999022 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" event={"ID":"573c007e-6a9b-461e-bf72-e01c0ab6e784","Type":"ContainerStarted","Data":"a9aa59e87e0ee7d2e35dce107b89ee9635aa74ffed828f8d35da715db95f157b"} Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.001650 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ltrrf" event={"ID":"55d621d1-f812-4467-aeee-2ed0da3d68ac","Type":"ContainerStarted","Data":"ba5b6b738bae84a71d7aa11997f063b97f6e205f0c0f8db95d8ba9c4940a3669"} Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.007144 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerStarted","Data":"7ac868d8477e72f95aa8feff337ee4c7a8d42efada01be7e61b69e1fdec4b771"} Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.033288 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mxv54" event={"ID":"65c69ba1-9046-4c2b-b4ec-5d0307f991ea","Type":"ContainerStarted","Data":"290fcae6cfffbec443f9447fbac0c1272c7474f5ad004686e58a539395e8ba3d"} Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.033530 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-mxv54" event={"ID":"65c69ba1-9046-4c2b-b4ec-5d0307f991ea","Type":"ContainerStarted","Data":"93110c670a2400d139cbe3b58c2bb0b5d0ff6acab8c9fec94b4f316cc0ba0d0f"} Jan 21 15:52:51 crc kubenswrapper[4890]: W0121 15:52:51.035942 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeb40c3a_bdb9_4fd1_a722_843b14bad9d8.slice/crio-6014805afcd1a515b4cb56ef8d7deca6644699007e9015811c0d3ee5f2b9c510 WatchSource:0}: Error finding container 6014805afcd1a515b4cb56ef8d7deca6644699007e9015811c0d3ee5f2b9c510: Status 404 returned error can't find the container with id 6014805afcd1a515b4cb56ef8d7deca6644699007e9015811c0d3ee5f2b9c510 Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.060072 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-f8v9z"] Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.073098 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-thtbf"] Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.093152 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mxv54" podStartSLOduration=2.093132928 podStartE2EDuration="2.093132928s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:51.05367114 +0000 UTC m=+1253.415113569" watchObservedRunningTime="2026-01-21 15:52:51.093132928 +0000 UTC m=+1253.454575347" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.162465 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-xmn44"] Jan 21 15:52:51 crc kubenswrapper[4890]: W0121 15:52:51.164567 4890 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e75f4bb_e544_49f4_88ba_ed75d8d0365f.slice/crio-1e35ac5caa661e9eca9b0f584f07243c28a50d1c426f7e23c3d41390c584c5f9 WatchSource:0}: Error finding container 1e35ac5caa661e9eca9b0f584f07243c28a50d1c426f7e23c3d41390c584c5f9: Status 404 returned error can't find the container with id 1e35ac5caa661e9eca9b0f584f07243c28a50d1c426f7e23c3d41390c584c5f9 Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.178160 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wm8lg"] Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.611408 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.637395 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-sb\") pod \"573c007e-6a9b-461e-bf72-e01c0ab6e784\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.637575 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-nb\") pod \"573c007e-6a9b-461e-bf72-e01c0ab6e784\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.637658 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-svc\") pod \"573c007e-6a9b-461e-bf72-e01c0ab6e784\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.637723 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-swift-storage-0\") pod \"573c007e-6a9b-461e-bf72-e01c0ab6e784\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.641057 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-config\") pod \"573c007e-6a9b-461e-bf72-e01c0ab6e784\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.641110 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wprc6\" (UniqueName: \"kubernetes.io/projected/573c007e-6a9b-461e-bf72-e01c0ab6e784-kube-api-access-wprc6\") pod \"573c007e-6a9b-461e-bf72-e01c0ab6e784\" (UID: \"573c007e-6a9b-461e-bf72-e01c0ab6e784\") " Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.656200 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573c007e-6a9b-461e-bf72-e01c0ab6e784-kube-api-access-wprc6" (OuterVolumeSpecName: "kube-api-access-wprc6") pod "573c007e-6a9b-461e-bf72-e01c0ab6e784" (UID: "573c007e-6a9b-461e-bf72-e01c0ab6e784"). InnerVolumeSpecName "kube-api-access-wprc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.694073 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.757555 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wprc6\" (UniqueName: \"kubernetes.io/projected/573c007e-6a9b-461e-bf72-e01c0ab6e784-kube-api-access-wprc6\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.807208 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "573c007e-6a9b-461e-bf72-e01c0ab6e784" (UID: "573c007e-6a9b-461e-bf72-e01c0ab6e784"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.840570 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "573c007e-6a9b-461e-bf72-e01c0ab6e784" (UID: "573c007e-6a9b-461e-bf72-e01c0ab6e784"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.840647 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "573c007e-6a9b-461e-bf72-e01c0ab6e784" (UID: "573c007e-6a9b-461e-bf72-e01c0ab6e784"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.844220 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-config" (OuterVolumeSpecName: "config") pod "573c007e-6a9b-461e-bf72-e01c0ab6e784" (UID: "573c007e-6a9b-461e-bf72-e01c0ab6e784"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.844246 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "573c007e-6a9b-461e-bf72-e01c0ab6e784" (UID: "573c007e-6a9b-461e-bf72-e01c0ab6e784"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.864477 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.865030 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.865147 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.865417 4890 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:51 
crc kubenswrapper[4890]: I0121 15:52:51.865505 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/573c007e-6a9b-461e-bf72-e01c0ab6e784-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.948616 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b7aaa8-30dd-4230-8d74-a20cbe7f10d9" path="/var/lib/kubelet/pods/50b7aaa8-30dd-4230-8d74-a20cbe7f10d9/volumes" Jan 21 15:52:51 crc kubenswrapper[4890]: I0121 15:52:51.949487 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.048331 4890 generic.go:334] "Generic (PLEG): container finished" podID="051e76fa-25e8-401b-b5e4-67feddadd6c6" containerID="fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa" exitCode=0 Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.048410 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" event={"ID":"051e76fa-25e8-401b-b5e4-67feddadd6c6","Type":"ContainerDied","Data":"fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa"} Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.048441 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" event={"ID":"051e76fa-25e8-401b-b5e4-67feddadd6c6","Type":"ContainerStarted","Data":"74c88e2f5cec77dc7f74f64b2af7c1f7948f92130c6ea95157eab4afddf7ffed"} Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.053632 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26f7f3b2-ffa9-4306-b25d-16e62961439a","Type":"ContainerStarted","Data":"c70ca2c79a07f4fe25aaf1b94829f02a7a33d1a51b8b25764a9c5448f74b174e"} Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.060091 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"c248d71c-c02b-4bce-8c03-6cd7c237fad3","Type":"ContainerStarted","Data":"2331864986f71c9081e3d21ed3902170c62303f1c99f5a603d0a19d645534e60"} Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.093214 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-thtbf" event={"ID":"deb40c3a-bdb9-4fd1-a722-843b14bad9d8","Type":"ContainerStarted","Data":"6014805afcd1a515b4cb56ef8d7deca6644699007e9015811c0d3ee5f2b9c510"} Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.097001 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f8v9z" event={"ID":"ced3b279-b256-483b-af6f-3b13721f1ef8","Type":"ContainerStarted","Data":"ce09e5c3848074e56c8f1304c5eeb0f39e955c23e11561464cb386c01b9d2fe2"} Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.097161 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f8v9z" event={"ID":"ced3b279-b256-483b-af6f-3b13721f1ef8","Type":"ContainerStarted","Data":"7fe80c95a7ed625981b3d5c4e290e71b85c05b8b2e1ce190bfabbff93874fb98"} Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.101901 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.102400 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-c2nzb" event={"ID":"573c007e-6a9b-461e-bf72-e01c0ab6e784","Type":"ContainerDied","Data":"a9aa59e87e0ee7d2e35dce107b89ee9635aa74ffed828f8d35da715db95f157b"} Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.102460 4890 scope.go:117] "RemoveContainer" containerID="fad0d091c2334f97a8e7f61538dc5961276aaba0d2f7ec616808c5e768a9b77f" Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.111473 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wm8lg" event={"ID":"5e75f4bb-e544-49f4-88ba-ed75d8d0365f","Type":"ContainerStarted","Data":"1e35ac5caa661e9eca9b0f584f07243c28a50d1c426f7e23c3d41390c584c5f9"} Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.125340 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-f8v9z" podStartSLOduration=3.125316663 podStartE2EDuration="3.125316663s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:52.118193106 +0000 UTC m=+1254.479635515" watchObservedRunningTime="2026-01-21 15:52:52.125316663 +0000 UTC m=+1254.486759072" Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.187825 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-c2nzb"] Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.211680 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-c2nzb"] Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.421130 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.455730 4890 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:52:52 crc kubenswrapper[4890]: I0121 15:52:52.539672 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:52:53 crc kubenswrapper[4890]: I0121 15:52:53.195488 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" event={"ID":"051e76fa-25e8-401b-b5e4-67feddadd6c6","Type":"ContainerStarted","Data":"399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec"} Jan 21 15:52:53 crc kubenswrapper[4890]: I0121 15:52:53.196489 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:52:53 crc kubenswrapper[4890]: I0121 15:52:53.204707 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26f7f3b2-ffa9-4306-b25d-16e62961439a","Type":"ContainerStarted","Data":"30410bf534bcb11c506762faf3cd9a27780de4a9a2662fcc5852c6cf7abab505"} Jan 21 15:52:53 crc kubenswrapper[4890]: I0121 15:52:53.209716 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c248d71c-c02b-4bce-8c03-6cd7c237fad3","Type":"ContainerStarted","Data":"2ec0e7d1c1cbd03718b240214882a8996e8bba9fee521bf71d36d8b51a8a593f"} Jan 21 15:52:53 crc kubenswrapper[4890]: I0121 15:52:53.232015 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" podStartSLOduration=4.231993606 podStartE2EDuration="4.231993606s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:53.228374386 +0000 UTC m=+1255.589816795" watchObservedRunningTime="2026-01-21 15:52:53.231993606 +0000 UTC m=+1255.593436025" Jan 21 15:52:53 crc kubenswrapper[4890]: I0121 15:52:53.937886 
4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573c007e-6a9b-461e-bf72-e01c0ab6e784" path="/var/lib/kubelet/pods/573c007e-6a9b-461e-bf72-e01c0ab6e784/volumes" Jan 21 15:52:55 crc kubenswrapper[4890]: I0121 15:52:55.233053 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c248d71c-c02b-4bce-8c03-6cd7c237fad3","Type":"ContainerStarted","Data":"d2d4dc4d069b0b5f0a6d245bfa6c12b1c7ff4fb9a8fb25c74d480e50682d62ab"} Jan 21 15:52:55 crc kubenswrapper[4890]: I0121 15:52:55.233958 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerName="glance-log" containerID="cri-o://2ec0e7d1c1cbd03718b240214882a8996e8bba9fee521bf71d36d8b51a8a593f" gracePeriod=30 Jan 21 15:52:55 crc kubenswrapper[4890]: I0121 15:52:55.234501 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerName="glance-httpd" containerID="cri-o://d2d4dc4d069b0b5f0a6d245bfa6c12b1c7ff4fb9a8fb25c74d480e50682d62ab" gracePeriod=30 Jan 21 15:52:55 crc kubenswrapper[4890]: I0121 15:52:55.239010 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26f7f3b2-ffa9-4306-b25d-16e62961439a","Type":"ContainerStarted","Data":"d1f8c63aa4ced44e341cf135a82901aa8cbfcbede4666669c5c7cb8e0068e3b8"} Jan 21 15:52:55 crc kubenswrapper[4890]: I0121 15:52:55.239310 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerName="glance-httpd" containerID="cri-o://d1f8c63aa4ced44e341cf135a82901aa8cbfcbede4666669c5c7cb8e0068e3b8" gracePeriod=30 Jan 21 15:52:55 crc kubenswrapper[4890]: I0121 15:52:55.239276 4890 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerName="glance-log" containerID="cri-o://30410bf534bcb11c506762faf3cd9a27780de4a9a2662fcc5852c6cf7abab505" gracePeriod=30 Jan 21 15:52:55 crc kubenswrapper[4890]: I0121 15:52:55.276820 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.276797613 podStartE2EDuration="6.276797613s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:55.261615726 +0000 UTC m=+1257.623058125" watchObservedRunningTime="2026-01-21 15:52:55.276797613 +0000 UTC m=+1257.638240022" Jan 21 15:52:55 crc kubenswrapper[4890]: I0121 15:52:55.294523 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.294501312 podStartE2EDuration="6.294501312s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:55.285835717 +0000 UTC m=+1257.647278126" watchObservedRunningTime="2026-01-21 15:52:55.294501312 +0000 UTC m=+1257.655943721" Jan 21 15:52:56 crc kubenswrapper[4890]: I0121 15:52:56.251616 4890 generic.go:334] "Generic (PLEG): container finished" podID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerID="d1f8c63aa4ced44e341cf135a82901aa8cbfcbede4666669c5c7cb8e0068e3b8" exitCode=0 Jan 21 15:52:56 crc kubenswrapper[4890]: I0121 15:52:56.251928 4890 generic.go:334] "Generic (PLEG): container finished" podID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerID="30410bf534bcb11c506762faf3cd9a27780de4a9a2662fcc5852c6cf7abab505" exitCode=143 Jan 21 15:52:56 crc kubenswrapper[4890]: I0121 15:52:56.251672 4890 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26f7f3b2-ffa9-4306-b25d-16e62961439a","Type":"ContainerDied","Data":"d1f8c63aa4ced44e341cf135a82901aa8cbfcbede4666669c5c7cb8e0068e3b8"} Jan 21 15:52:56 crc kubenswrapper[4890]: I0121 15:52:56.252012 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26f7f3b2-ffa9-4306-b25d-16e62961439a","Type":"ContainerDied","Data":"30410bf534bcb11c506762faf3cd9a27780de4a9a2662fcc5852c6cf7abab505"} Jan 21 15:52:56 crc kubenswrapper[4890]: I0121 15:52:56.254764 4890 generic.go:334] "Generic (PLEG): container finished" podID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerID="d2d4dc4d069b0b5f0a6d245bfa6c12b1c7ff4fb9a8fb25c74d480e50682d62ab" exitCode=0 Jan 21 15:52:56 crc kubenswrapper[4890]: I0121 15:52:56.254795 4890 generic.go:334] "Generic (PLEG): container finished" podID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerID="2ec0e7d1c1cbd03718b240214882a8996e8bba9fee521bf71d36d8b51a8a593f" exitCode=143 Jan 21 15:52:56 crc kubenswrapper[4890]: I0121 15:52:56.254822 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c248d71c-c02b-4bce-8c03-6cd7c237fad3","Type":"ContainerDied","Data":"d2d4dc4d069b0b5f0a6d245bfa6c12b1c7ff4fb9a8fb25c74d480e50682d62ab"} Jan 21 15:52:56 crc kubenswrapper[4890]: I0121 15:52:56.254852 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c248d71c-c02b-4bce-8c03-6cd7c237fad3","Type":"ContainerDied","Data":"2ec0e7d1c1cbd03718b240214882a8996e8bba9fee521bf71d36d8b51a8a593f"} Jan 21 15:52:57 crc kubenswrapper[4890]: I0121 15:52:57.270084 4890 generic.go:334] "Generic (PLEG): container finished" podID="65c69ba1-9046-4c2b-b4ec-5d0307f991ea" containerID="290fcae6cfffbec443f9447fbac0c1272c7474f5ad004686e58a539395e8ba3d" exitCode=0 Jan 21 15:52:57 crc kubenswrapper[4890]: 
I0121 15:52:57.270162 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mxv54" event={"ID":"65c69ba1-9046-4c2b-b4ec-5d0307f991ea","Type":"ContainerDied","Data":"290fcae6cfffbec443f9447fbac0c1272c7474f5ad004686e58a539395e8ba3d"} Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.847670 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.941672 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzxsf\" (UniqueName: \"kubernetes.io/projected/26f7f3b2-ffa9-4306-b25d-16e62961439a-kube-api-access-tzxsf\") pod \"26f7f3b2-ffa9-4306-b25d-16e62961439a\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.941742 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-logs\") pod \"26f7f3b2-ffa9-4306-b25d-16e62961439a\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.941781 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-config-data\") pod \"26f7f3b2-ffa9-4306-b25d-16e62961439a\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.941798 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-internal-tls-certs\") pod \"26f7f3b2-ffa9-4306-b25d-16e62961439a\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.941844 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-combined-ca-bundle\") pod \"26f7f3b2-ffa9-4306-b25d-16e62961439a\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.941893 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-httpd-run\") pod \"26f7f3b2-ffa9-4306-b25d-16e62961439a\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.941962 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-scripts\") pod \"26f7f3b2-ffa9-4306-b25d-16e62961439a\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.941981 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"26f7f3b2-ffa9-4306-b25d-16e62961439a\" (UID: \"26f7f3b2-ffa9-4306-b25d-16e62961439a\") " Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.942229 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-logs" (OuterVolumeSpecName: "logs") pod "26f7f3b2-ffa9-4306-b25d-16e62961439a" (UID: "26f7f3b2-ffa9-4306-b25d-16e62961439a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.942704 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.942853 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "26f7f3b2-ffa9-4306-b25d-16e62961439a" (UID: "26f7f3b2-ffa9-4306-b25d-16e62961439a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.961611 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f7f3b2-ffa9-4306-b25d-16e62961439a-kube-api-access-tzxsf" (OuterVolumeSpecName: "kube-api-access-tzxsf") pod "26f7f3b2-ffa9-4306-b25d-16e62961439a" (UID: "26f7f3b2-ffa9-4306-b25d-16e62961439a"). InnerVolumeSpecName "kube-api-access-tzxsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.962227 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "26f7f3b2-ffa9-4306-b25d-16e62961439a" (UID: "26f7f3b2-ffa9-4306-b25d-16e62961439a"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.962335 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-scripts" (OuterVolumeSpecName: "scripts") pod "26f7f3b2-ffa9-4306-b25d-16e62961439a" (UID: "26f7f3b2-ffa9-4306-b25d-16e62961439a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.974744 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26f7f3b2-ffa9-4306-b25d-16e62961439a" (UID: "26f7f3b2-ffa9-4306-b25d-16e62961439a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.991628 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-config-data" (OuterVolumeSpecName: "config-data") pod "26f7f3b2-ffa9-4306-b25d-16e62961439a" (UID: "26f7f3b2-ffa9-4306-b25d-16e62961439a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:58 crc kubenswrapper[4890]: I0121 15:52:58.992126 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "26f7f3b2-ffa9-4306-b25d-16e62961439a" (UID: "26f7f3b2-ffa9-4306-b25d-16e62961439a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.045083 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.045135 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.045149 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzxsf\" (UniqueName: \"kubernetes.io/projected/26f7f3b2-ffa9-4306-b25d-16e62961439a-kube-api-access-tzxsf\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.045163 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.045176 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.045186 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f7f3b2-ffa9-4306-b25d-16e62961439a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.045198 4890 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26f7f3b2-ffa9-4306-b25d-16e62961439a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.066054 4890 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.147217 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.287497 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26f7f3b2-ffa9-4306-b25d-16e62961439a","Type":"ContainerDied","Data":"c70ca2c79a07f4fe25aaf1b94829f02a7a33d1a51b8b25764a9c5448f74b174e"} Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.287926 4890 scope.go:117] "RemoveContainer" containerID="d1f8c63aa4ced44e341cf135a82901aa8cbfcbede4666669c5c7cb8e0068e3b8" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.287543 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.325126 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.340713 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.358077 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:52:59 crc kubenswrapper[4890]: E0121 15:52:59.358470 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573c007e-6a9b-461e-bf72-e01c0ab6e784" containerName="init" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.358486 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="573c007e-6a9b-461e-bf72-e01c0ab6e784" containerName="init" Jan 21 15:52:59 crc kubenswrapper[4890]: E0121 15:52:59.358507 4890 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerName="glance-httpd" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.358515 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerName="glance-httpd" Jan 21 15:52:59 crc kubenswrapper[4890]: E0121 15:52:59.358525 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerName="glance-log" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.358533 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerName="glance-log" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.358739 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="573c007e-6a9b-461e-bf72-e01c0ab6e784" containerName="init" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.358770 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerName="glance-httpd" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.358782 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f7f3b2-ffa9-4306-b25d-16e62961439a" containerName="glance-log" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.359699 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.363414 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.367457 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.373206 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.458172 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.458267 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl8wq\" (UniqueName: \"kubernetes.io/projected/7e221599-8207-445a-bdbf-79cc7b21590a-kube-api-access-rl8wq\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.458292 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.458369 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.458391 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.458428 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.458555 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-logs\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.458630 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.559842 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.559890 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.559935 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.559996 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-logs\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.560064 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.560109 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.560151 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl8wq\" (UniqueName: \"kubernetes.io/projected/7e221599-8207-445a-bdbf-79cc7b21590a-kube-api-access-rl8wq\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.560175 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.560182 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.560845 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.560873 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.566756 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.569775 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.572043 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.574213 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.584928 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl8wq\" (UniqueName: \"kubernetes.io/projected/7e221599-8207-445a-bdbf-79cc7b21590a-kube-api-access-rl8wq\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.600277 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.694860 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:52:59 crc kubenswrapper[4890]: I0121 15:52:59.927541 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26f7f3b2-ffa9-4306-b25d-16e62961439a" path="/var/lib/kubelet/pods/26f7f3b2-ffa9-4306-b25d-16e62961439a/volumes" Jan 21 15:53:00 crc kubenswrapper[4890]: I0121 15:53:00.234614 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:53:00 crc kubenswrapper[4890]: I0121 15:53:00.309273 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-lwp7d"] Jan 21 15:53:00 crc kubenswrapper[4890]: I0121 15:53:00.309543 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" podUID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerName="dnsmasq-dns" containerID="cri-o://54103974d785a15465d49e05a426b57cf3718efe8e315a495157785f5fdd81ce" gracePeriod=10 Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.311168 4890 generic.go:334] "Generic (PLEG): container finished" podID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerID="54103974d785a15465d49e05a426b57cf3718efe8e315a495157785f5fdd81ce" exitCode=0 Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.311496 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" 
event={"ID":"d33597fc-f17b-4c75-ad8d-2519551825f1","Type":"ContainerDied","Data":"54103974d785a15465d49e05a426b57cf3718efe8e315a495157785f5fdd81ce"} Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.507524 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.518748 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.600584 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-scripts\") pod \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.600644 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-combined-ca-bundle\") pod \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.600688 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-config-data\") pod \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.600742 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-httpd-run\") pod \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.600763 4890 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-combined-ca-bundle\") pod \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.600823 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-credential-keys\") pod \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601122 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c248d71c-c02b-4bce-8c03-6cd7c237fad3" (UID: "c248d71c-c02b-4bce-8c03-6cd7c237fad3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601377 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601411 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-config-data\") pod \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601473 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6gh2\" (UniqueName: \"kubernetes.io/projected/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-kube-api-access-j6gh2\") pod \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601544 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-logs\") pod \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601570 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-fernet-keys\") pod \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\" (UID: \"65c69ba1-9046-4c2b-b4ec-5d0307f991ea\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601616 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76d49\" (UniqueName: 
\"kubernetes.io/projected/c248d71c-c02b-4bce-8c03-6cd7c237fad3-kube-api-access-76d49\") pod \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601645 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-scripts\") pod \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601729 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-public-tls-certs\") pod \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\" (UID: \"c248d71c-c02b-4bce-8c03-6cd7c237fad3\") " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.601998 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-logs" (OuterVolumeSpecName: "logs") pod "c248d71c-c02b-4bce-8c03-6cd7c237fad3" (UID: "c248d71c-c02b-4bce-8c03-6cd7c237fad3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.602313 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.602331 4890 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c248d71c-c02b-4bce-8c03-6cd7c237fad3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.618450 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "65c69ba1-9046-4c2b-b4ec-5d0307f991ea" (UID: "65c69ba1-9046-4c2b-b4ec-5d0307f991ea"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.620057 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-kube-api-access-j6gh2" (OuterVolumeSpecName: "kube-api-access-j6gh2") pod "65c69ba1-9046-4c2b-b4ec-5d0307f991ea" (UID: "65c69ba1-9046-4c2b-b4ec-5d0307f991ea"). InnerVolumeSpecName "kube-api-access-j6gh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.620086 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "c248d71c-c02b-4bce-8c03-6cd7c237fad3" (UID: "c248d71c-c02b-4bce-8c03-6cd7c237fad3"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.620760 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-scripts" (OuterVolumeSpecName: "scripts") pod "65c69ba1-9046-4c2b-b4ec-5d0307f991ea" (UID: "65c69ba1-9046-4c2b-b4ec-5d0307f991ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.625760 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "65c69ba1-9046-4c2b-b4ec-5d0307f991ea" (UID: "65c69ba1-9046-4c2b-b4ec-5d0307f991ea"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.625769 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c248d71c-c02b-4bce-8c03-6cd7c237fad3-kube-api-access-76d49" (OuterVolumeSpecName: "kube-api-access-76d49") pod "c248d71c-c02b-4bce-8c03-6cd7c237fad3" (UID: "c248d71c-c02b-4bce-8c03-6cd7c237fad3"). InnerVolumeSpecName "kube-api-access-76d49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.631051 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-scripts" (OuterVolumeSpecName: "scripts") pod "c248d71c-c02b-4bce-8c03-6cd7c237fad3" (UID: "c248d71c-c02b-4bce-8c03-6cd7c237fad3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.639629 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c248d71c-c02b-4bce-8c03-6cd7c237fad3" (UID: "c248d71c-c02b-4bce-8c03-6cd7c237fad3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.652378 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-config-data" (OuterVolumeSpecName: "config-data") pod "65c69ba1-9046-4c2b-b4ec-5d0307f991ea" (UID: "65c69ba1-9046-4c2b-b4ec-5d0307f991ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.654396 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65c69ba1-9046-4c2b-b4ec-5d0307f991ea" (UID: "65c69ba1-9046-4c2b-b4ec-5d0307f991ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.669625 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c248d71c-c02b-4bce-8c03-6cd7c237fad3" (UID: "c248d71c-c02b-4bce-8c03-6cd7c237fad3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.705954 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.705992 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6gh2\" (UniqueName: \"kubernetes.io/projected/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-kube-api-access-j6gh2\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.706005 4890 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.706016 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76d49\" (UniqueName: \"kubernetes.io/projected/c248d71c-c02b-4bce-8c03-6cd7c237fad3-kube-api-access-76d49\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.706035 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.706045 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.706054 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.706064 4890 reconciler_common.go:293] 
"Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.706073 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.706083 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.706094 4890 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65c69ba1-9046-4c2b-b4ec-5d0307f991ea-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.708020 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-config-data" (OuterVolumeSpecName: "config-data") pod "c248d71c-c02b-4bce-8c03-6cd7c237fad3" (UID: "c248d71c-c02b-4bce-8c03-6cd7c237fad3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.722690 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.808708 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:01 crc kubenswrapper[4890]: I0121 15:53:01.808745 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c248d71c-c02b-4bce-8c03-6cd7c237fad3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.320637 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.320629 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c248d71c-c02b-4bce-8c03-6cd7c237fad3","Type":"ContainerDied","Data":"2331864986f71c9081e3d21ed3902170c62303f1c99f5a603d0a19d645534e60"} Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.324884 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mxv54" event={"ID":"65c69ba1-9046-4c2b-b4ec-5d0307f991ea","Type":"ContainerDied","Data":"93110c670a2400d139cbe3b58c2bb0b5d0ff6acab8c9fec94b4f316cc0ba0d0f"} Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.324931 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93110c670a2400d139cbe3b58c2bb0b5d0ff6acab8c9fec94b4f316cc0ba0d0f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.324933 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mxv54" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.353292 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.373511 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.385475 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:53:02 crc kubenswrapper[4890]: E0121 15:53:02.385839 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerName="glance-log" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.385855 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerName="glance-log" Jan 21 15:53:02 crc kubenswrapper[4890]: E0121 15:53:02.385883 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c69ba1-9046-4c2b-b4ec-5d0307f991ea" containerName="keystone-bootstrap" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.385890 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c69ba1-9046-4c2b-b4ec-5d0307f991ea" containerName="keystone-bootstrap" Jan 21 15:53:02 crc kubenswrapper[4890]: E0121 15:53:02.385903 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerName="glance-httpd" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.385909 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerName="glance-httpd" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.386065 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerName="glance-log" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.386078 
4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" containerName="glance-httpd" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.386091 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c69ba1-9046-4c2b-b4ec-5d0307f991ea" containerName="keystone-bootstrap" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.387134 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.392270 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.392569 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.397793 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.519764 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.519822 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.519881 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-config-data\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.519903 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-logs\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.519928 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-scripts\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.519947 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.519988 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4dmg\" (UniqueName: \"kubernetes.io/projected/d57adef6-94fe-4333-bf61-5ec2e55af351-kube-api-access-f4dmg\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.520019 4890 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.623461 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-config-data\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.624157 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-logs\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.624189 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-scripts\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.624206 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.624276 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4dmg\" 
(UniqueName: \"kubernetes.io/projected/d57adef6-94fe-4333-bf61-5ec2e55af351-kube-api-access-f4dmg\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.624652 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.624783 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-logs\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.625010 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.625103 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.625139 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.625225 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.628687 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-config-data\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.628924 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-scripts\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.629395 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.633783 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") 
" pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.646119 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4dmg\" (UniqueName: \"kubernetes.io/projected/d57adef6-94fe-4333-bf61-5ec2e55af351-kube-api-access-f4dmg\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.671119 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.674102 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mxv54"] Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.683065 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mxv54"] Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.709508 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.765553 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4x97f"] Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.766826 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.769633 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.771239 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.771750 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.772345 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l9jxh" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.772530 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.793683 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4x97f"] Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.828545 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-config-data\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.828609 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-combined-ca-bundle\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.828678 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tftc4\" (UniqueName: \"kubernetes.io/projected/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-kube-api-access-tftc4\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.828710 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-credential-keys\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.828745 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-fernet-keys\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.828778 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-scripts\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.930641 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-config-data\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.930700 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-combined-ca-bundle\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.930742 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tftc4\" (UniqueName: \"kubernetes.io/projected/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-kube-api-access-tftc4\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.930766 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-credential-keys\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.930788 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-fernet-keys\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.930813 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-scripts\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.934877 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-credential-keys\") pod \"keystone-bootstrap-4x97f\" 
(UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.935746 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-fernet-keys\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.935983 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-scripts\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.936041 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-config-data\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.946926 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tftc4\" (UniqueName: \"kubernetes.io/projected/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-kube-api-access-tftc4\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:02 crc kubenswrapper[4890]: I0121 15:53:02.952882 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-combined-ca-bundle\") pod \"keystone-bootstrap-4x97f\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:03 crc kubenswrapper[4890]: I0121 
15:53:03.098990 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:03 crc kubenswrapper[4890]: I0121 15:53:03.922978 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c69ba1-9046-4c2b-b4ec-5d0307f991ea" path="/var/lib/kubelet/pods/65c69ba1-9046-4c2b-b4ec-5d0307f991ea/volumes" Jan 21 15:53:03 crc kubenswrapper[4890]: I0121 15:53:03.923653 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c248d71c-c02b-4bce-8c03-6cd7c237fad3" path="/var/lib/kubelet/pods/c248d71c-c02b-4bce-8c03-6cd7c237fad3/volumes" Jan 21 15:53:07 crc kubenswrapper[4890]: I0121 15:53:07.898842 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" podUID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Jan 21 15:53:10 crc kubenswrapper[4890]: E0121 15:53:10.240870 4890 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 21 15:53:10 crc kubenswrapper[4890]: E0121 15:53:10.241631 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8bs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wm8lg_openstack(5e75f4bb-e544-49f4-88ba-ed75d8d0365f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:53:10 crc kubenswrapper[4890]: E0121 15:53:10.243446 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-wm8lg" 
podUID="5e75f4bb-e544-49f4-88ba-ed75d8d0365f" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.353106 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.398037 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" event={"ID":"d33597fc-f17b-4c75-ad8d-2519551825f1","Type":"ContainerDied","Data":"20b5942db7803fdbbd887b8716094b589b3f59c8e42f3fcb91b4231e626bab7b"} Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.398174 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" Jan 21 15:53:10 crc kubenswrapper[4890]: E0121 15:53:10.401733 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-wm8lg" podUID="5e75f4bb-e544-49f4-88ba-ed75d8d0365f" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.470443 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-config\") pod \"d33597fc-f17b-4c75-ad8d-2519551825f1\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.470664 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-sb\") pod \"d33597fc-f17b-4c75-ad8d-2519551825f1\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.470749 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jnxts\" (UniqueName: \"kubernetes.io/projected/d33597fc-f17b-4c75-ad8d-2519551825f1-kube-api-access-jnxts\") pod \"d33597fc-f17b-4c75-ad8d-2519551825f1\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.470777 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-nb\") pod \"d33597fc-f17b-4c75-ad8d-2519551825f1\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.470836 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-dns-svc\") pod \"d33597fc-f17b-4c75-ad8d-2519551825f1\" (UID: \"d33597fc-f17b-4c75-ad8d-2519551825f1\") " Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.485504 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33597fc-f17b-4c75-ad8d-2519551825f1-kube-api-access-jnxts" (OuterVolumeSpecName: "kube-api-access-jnxts") pod "d33597fc-f17b-4c75-ad8d-2519551825f1" (UID: "d33597fc-f17b-4c75-ad8d-2519551825f1"). InnerVolumeSpecName "kube-api-access-jnxts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.515283 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d33597fc-f17b-4c75-ad8d-2519551825f1" (UID: "d33597fc-f17b-4c75-ad8d-2519551825f1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.518986 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-config" (OuterVolumeSpecName: "config") pod "d33597fc-f17b-4c75-ad8d-2519551825f1" (UID: "d33597fc-f17b-4c75-ad8d-2519551825f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.519882 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d33597fc-f17b-4c75-ad8d-2519551825f1" (UID: "d33597fc-f17b-4c75-ad8d-2519551825f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.530538 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d33597fc-f17b-4c75-ad8d-2519551825f1" (UID: "d33597fc-f17b-4c75-ad8d-2519551825f1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.572985 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.573026 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnxts\" (UniqueName: \"kubernetes.io/projected/d33597fc-f17b-4c75-ad8d-2519551825f1-kube-api-access-jnxts\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.573038 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.573047 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.573057 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33597fc-f17b-4c75-ad8d-2519551825f1-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.737472 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-lwp7d"] Jan 21 15:53:10 crc kubenswrapper[4890]: I0121 15:53:10.746030 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-lwp7d"] Jan 21 15:53:11 crc kubenswrapper[4890]: I0121 15:53:11.333431 4890 scope.go:117] "RemoveContainer" containerID="30410bf534bcb11c506762faf3cd9a27780de4a9a2662fcc5852c6cf7abab505" Jan 21 15:53:11 crc kubenswrapper[4890]: E0121 15:53:11.375285 4890 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 21 15:53:11 crc kubenswrapper[4890]: E0121 15:53:11.376675 4890 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOn
ly:nil,},VolumeMount{Name:kube-api-access-4f467,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-ltrrf_openstack(55d621d1-f812-4467-aeee-2ed0da3d68ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:53:11 crc kubenswrapper[4890]: E0121 15:53:11.378131 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-ltrrf" podUID="55d621d1-f812-4467-aeee-2ed0da3d68ac" Jan 21 15:53:11 crc kubenswrapper[4890]: E0121 15:53:11.413456 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-ltrrf" podUID="55d621d1-f812-4467-aeee-2ed0da3d68ac" Jan 21 15:53:11 crc kubenswrapper[4890]: I0121 15:53:11.413758 4890 scope.go:117] "RemoveContainer" containerID="d2d4dc4d069b0b5f0a6d245bfa6c12b1c7ff4fb9a8fb25c74d480e50682d62ab" Jan 21 15:53:11 crc kubenswrapper[4890]: I0121 
15:53:11.544056 4890 scope.go:117] "RemoveContainer" containerID="2ec0e7d1c1cbd03718b240214882a8996e8bba9fee521bf71d36d8b51a8a593f" Jan 21 15:53:11 crc kubenswrapper[4890]: I0121 15:53:11.586055 4890 scope.go:117] "RemoveContainer" containerID="54103974d785a15465d49e05a426b57cf3718efe8e315a495157785f5fdd81ce" Jan 21 15:53:11 crc kubenswrapper[4890]: I0121 15:53:11.617268 4890 scope.go:117] "RemoveContainer" containerID="59497b223247831b279cdfa5ba3775de27963179a11e66f62b77dfe3cf22bb5a" Jan 21 15:53:11 crc kubenswrapper[4890]: I0121 15:53:11.900374 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4x97f"] Jan 21 15:53:11 crc kubenswrapper[4890]: I0121 15:53:11.936929 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33597fc-f17b-4c75-ad8d-2519551825f1" path="/var/lib/kubelet/pods/d33597fc-f17b-4c75-ad8d-2519551825f1/volumes" Jan 21 15:53:11 crc kubenswrapper[4890]: I0121 15:53:11.995113 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:53:11 crc kubenswrapper[4890]: W0121 15:53:11.997113 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e221599_8207_445a_bdbf_79cc7b21590a.slice/crio-1f914164fb7e43008552114c062f804aa5ac5217e8e9840209a21d53bc2bd24a WatchSource:0}: Error finding container 1f914164fb7e43008552114c062f804aa5ac5217e8e9840209a21d53bc2bd24a: Status 404 returned error can't find the container with id 1f914164fb7e43008552114c062f804aa5ac5217e8e9840209a21d53bc2bd24a Jan 21 15:53:12 crc kubenswrapper[4890]: I0121 15:53:12.419032 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4x97f" event={"ID":"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3","Type":"ContainerStarted","Data":"fe578b5a9120cabe848837eb1fe2b519560f4c2d26dee5d0a871fa17df5e83f7"} Jan 21 15:53:12 crc kubenswrapper[4890]: I0121 15:53:12.419329 4890 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4x97f" event={"ID":"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3","Type":"ContainerStarted","Data":"cfd52469547b9dfdd8e06634409aac3aab480da34df8ba374d4f0805970ab982"} Jan 21 15:53:12 crc kubenswrapper[4890]: I0121 15:53:12.428840 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-thtbf" event={"ID":"deb40c3a-bdb9-4fd1-a722-843b14bad9d8","Type":"ContainerStarted","Data":"37c37359b700ae7dab497ac93d747b6bf79f54edc4ddd64a222b2e24e26b0e48"} Jan 21 15:53:12 crc kubenswrapper[4890]: I0121 15:53:12.432310 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerStarted","Data":"a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe"} Jan 21 15:53:12 crc kubenswrapper[4890]: I0121 15:53:12.440000 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e221599-8207-445a-bdbf-79cc7b21590a","Type":"ContainerStarted","Data":"1f914164fb7e43008552114c062f804aa5ac5217e8e9840209a21d53bc2bd24a"} Jan 21 15:53:12 crc kubenswrapper[4890]: I0121 15:53:12.452689 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4x97f" podStartSLOduration=10.452661399 podStartE2EDuration="10.452661399s" podCreationTimestamp="2026-01-21 15:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:12.43895677 +0000 UTC m=+1274.800399179" watchObservedRunningTime="2026-01-21 15:53:12.452661399 +0000 UTC m=+1274.814103808" Jan 21 15:53:12 crc kubenswrapper[4890]: I0121 15:53:12.464668 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-thtbf" podStartSLOduration=4.246798784 podStartE2EDuration="23.464613766s" 
podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="2026-01-21 15:52:51.040981125 +0000 UTC m=+1253.402423534" lastFinishedPulling="2026-01-21 15:53:10.258796117 +0000 UTC m=+1272.620238516" observedRunningTime="2026-01-21 15:53:12.462113654 +0000 UTC m=+1274.823556063" watchObservedRunningTime="2026-01-21 15:53:12.464613766 +0000 UTC m=+1274.826056185" Jan 21 15:53:12 crc kubenswrapper[4890]: I0121 15:53:12.520132 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:53:12 crc kubenswrapper[4890]: I0121 15:53:12.900619 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-lwp7d" podUID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Jan 21 15:53:13 crc kubenswrapper[4890]: I0121 15:53:13.453464 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e221599-8207-445a-bdbf-79cc7b21590a","Type":"ContainerStarted","Data":"7a84f64e5ecf6747684f4ee3339734668618656034a8696a2604e4233cf17843"} Jan 21 15:53:13 crc kubenswrapper[4890]: I0121 15:53:13.455278 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e221599-8207-445a-bdbf-79cc7b21590a","Type":"ContainerStarted","Data":"f24965e1823d8ec92fee963545e1ae34491cd2ced770b9228ea5dd223fd108a6"} Jan 21 15:53:13 crc kubenswrapper[4890]: I0121 15:53:13.455774 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d57adef6-94fe-4333-bf61-5ec2e55af351","Type":"ContainerStarted","Data":"dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0"} Jan 21 15:53:13 crc kubenswrapper[4890]: I0121 15:53:13.455883 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"d57adef6-94fe-4333-bf61-5ec2e55af351","Type":"ContainerStarted","Data":"75fa82c99fc465605faade3472d1c9134d40b37995440c5e8b4d227d6dd65722"} Jan 21 15:53:14 crc kubenswrapper[4890]: I0121 15:53:14.481012 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerStarted","Data":"7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c"} Jan 21 15:53:14 crc kubenswrapper[4890]: I0121 15:53:14.483134 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d57adef6-94fe-4333-bf61-5ec2e55af351","Type":"ContainerStarted","Data":"fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419"} Jan 21 15:53:14 crc kubenswrapper[4890]: I0121 15:53:14.526309 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.52628982 podStartE2EDuration="12.52628982s" podCreationTimestamp="2026-01-21 15:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:14.521617244 +0000 UTC m=+1276.883059653" watchObservedRunningTime="2026-01-21 15:53:14.52628982 +0000 UTC m=+1276.887732229" Jan 21 15:53:14 crc kubenswrapper[4890]: I0121 15:53:14.549463 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.549444394 podStartE2EDuration="15.549444394s" podCreationTimestamp="2026-01-21 15:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:14.541846425 +0000 UTC m=+1276.903288844" watchObservedRunningTime="2026-01-21 15:53:14.549444394 +0000 UTC m=+1276.910886803" Jan 21 15:53:17 crc kubenswrapper[4890]: I0121 15:53:17.508950 4890 generic.go:334] 
"Generic (PLEG): container finished" podID="deb40c3a-bdb9-4fd1-a722-843b14bad9d8" containerID="37c37359b700ae7dab497ac93d747b6bf79f54edc4ddd64a222b2e24e26b0e48" exitCode=0 Jan 21 15:53:17 crc kubenswrapper[4890]: I0121 15:53:17.509374 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-thtbf" event={"ID":"deb40c3a-bdb9-4fd1-a722-843b14bad9d8","Type":"ContainerDied","Data":"37c37359b700ae7dab497ac93d747b6bf79f54edc4ddd64a222b2e24e26b0e48"} Jan 21 15:53:17 crc kubenswrapper[4890]: I0121 15:53:17.511970 4890 generic.go:334] "Generic (PLEG): container finished" podID="da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" containerID="fe578b5a9120cabe848837eb1fe2b519560f4c2d26dee5d0a871fa17df5e83f7" exitCode=0 Jan 21 15:53:17 crc kubenswrapper[4890]: I0121 15:53:17.512015 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4x97f" event={"ID":"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3","Type":"ContainerDied","Data":"fe578b5a9120cabe848837eb1fe2b519560f4c2d26dee5d0a871fa17df5e83f7"} Jan 21 15:53:18 crc kubenswrapper[4890]: I0121 15:53:18.522815 4890 generic.go:334] "Generic (PLEG): container finished" podID="ced3b279-b256-483b-af6f-3b13721f1ef8" containerID="ce09e5c3848074e56c8f1304c5eeb0f39e955c23e11561464cb386c01b9d2fe2" exitCode=0 Jan 21 15:53:18 crc kubenswrapper[4890]: I0121 15:53:18.522981 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f8v9z" event={"ID":"ced3b279-b256-483b-af6f-3b13721f1ef8","Type":"ContainerDied","Data":"ce09e5c3848074e56c8f1304c5eeb0f39e955c23e11561464cb386c01b9d2fe2"} Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.484318 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-thtbf" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.490572 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.536012 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4x97f" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.535955 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4x97f" event={"ID":"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3","Type":"ContainerDied","Data":"cfd52469547b9dfdd8e06634409aac3aab480da34df8ba374d4f0805970ab982"} Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.538541 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd52469547b9dfdd8e06634409aac3aab480da34df8ba374d4f0805970ab982" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.543523 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-thtbf" event={"ID":"deb40c3a-bdb9-4fd1-a722-843b14bad9d8","Type":"ContainerDied","Data":"6014805afcd1a515b4cb56ef8d7deca6644699007e9015811c0d3ee5f2b9c510"} Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.543570 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6014805afcd1a515b4cb56ef8d7deca6644699007e9015811c0d3ee5f2b9c510" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.543574 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-thtbf" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.631642 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-688fbc5db-f9csp"] Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638629 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwscv\" (UniqueName: \"kubernetes.io/projected/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-kube-api-access-rwscv\") pod \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638701 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-combined-ca-bundle\") pod \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638750 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-config-data\") pod \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " Jan 21 15:53:19 crc kubenswrapper[4890]: E0121 15:53:19.638769 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerName="dnsmasq-dns" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638795 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerName="dnsmasq-dns" Jan 21 15:53:19 crc kubenswrapper[4890]: E0121 15:53:19.638820 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" containerName="keystone-bootstrap" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638830 4890 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" containerName="keystone-bootstrap" Jan 21 15:53:19 crc kubenswrapper[4890]: E0121 15:53:19.638840 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerName="init" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638848 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerName="init" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638861 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-scripts\") pod \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " Jan 21 15:53:19 crc kubenswrapper[4890]: E0121 15:53:19.638871 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb40c3a-bdb9-4fd1-a722-843b14bad9d8" containerName="placement-db-sync" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638880 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb40c3a-bdb9-4fd1-a722-843b14bad9d8" containerName="placement-db-sync" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638906 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-fernet-keys\") pod \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.638994 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-credential-keys\") pod \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.639045 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-combined-ca-bundle\") pod \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.639139 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-logs\") pod \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.639189 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-scripts\") pod \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.639262 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tftc4\" (UniqueName: \"kubernetes.io/projected/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-kube-api-access-tftc4\") pod \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\" (UID: \"da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3\") " Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.639303 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-config-data\") pod \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\" (UID: \"deb40c3a-bdb9-4fd1-a722-843b14bad9d8\") " Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.639775 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" containerName="keystone-bootstrap" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.639813 4890 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="deb40c3a-bdb9-4fd1-a722-843b14bad9d8" containerName="placement-db-sync" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.639844 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33597fc-f17b-4c75-ad8d-2519551825f1" containerName="dnsmasq-dns" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.643674 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-logs" (OuterVolumeSpecName: "logs") pod "deb40c3a-bdb9-4fd1-a722-843b14bad9d8" (UID: "deb40c3a-bdb9-4fd1-a722-843b14bad9d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.649250 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" (UID: "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.650076 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" (UID: "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.654334 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.656139 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-scripts" (OuterVolumeSpecName: "scripts") pod "deb40c3a-bdb9-4fd1-a722-843b14bad9d8" (UID: "deb40c3a-bdb9-4fd1-a722-843b14bad9d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.660113 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.663853 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.707685 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-scripts" (OuterVolumeSpecName: "scripts") pod "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" (UID: "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.707751 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.707756 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-kube-api-access-tftc4" (OuterVolumeSpecName: "kube-api-access-tftc4") pod "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" (UID: "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3"). InnerVolumeSpecName "kube-api-access-tftc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.707792 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.709529 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-kube-api-access-rwscv" (OuterVolumeSpecName: "kube-api-access-rwscv") pod "deb40c3a-bdb9-4fd1-a722-843b14bad9d8" (UID: "deb40c3a-bdb9-4fd1-a722-843b14bad9d8"). InnerVolumeSpecName "kube-api-access-rwscv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.723427 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-config-data" (OuterVolumeSpecName: "config-data") pod "deb40c3a-bdb9-4fd1-a722-843b14bad9d8" (UID: "deb40c3a-bdb9-4fd1-a722-843b14bad9d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.730271 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deb40c3a-bdb9-4fd1-a722-843b14bad9d8" (UID: "deb40c3a-bdb9-4fd1-a722-843b14bad9d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.730322 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-config-data" (OuterVolumeSpecName: "config-data") pod "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" (UID: "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746443 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a25c82c-f72c-4ecb-a760-a568761bd5f2-logs\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746497 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxz48\" (UniqueName: \"kubernetes.io/projected/2a25c82c-f72c-4ecb-a760-a568761bd5f2-kube-api-access-nxz48\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746552 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-scripts\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746574 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-config-data\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746618 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-public-tls-certs\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " 
pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746687 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-internal-tls-certs\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746709 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-combined-ca-bundle\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746779 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746793 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746805 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tftc4\" (UniqueName: \"kubernetes.io/projected/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-kube-api-access-tftc4\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746816 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746826 4890 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-rwscv\" (UniqueName: \"kubernetes.io/projected/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-kube-api-access-rwscv\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746836 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746845 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746855 4890 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746865 4890 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.746875 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb40c3a-bdb9-4fd1-a722-843b14bad9d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.757511 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" (UID: "da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.757585 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-688fbc5db-f9csp"] Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.766421 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.775988 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5d6cd7788b-hrbst"] Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.779272 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.779394 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.783737 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.787896 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.819009 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d6cd7788b-hrbst"] Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.848588 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-scripts\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.848632 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-config-data\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.848696 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-public-tls-certs\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.848792 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-internal-tls-certs\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.848819 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-combined-ca-bundle\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.848914 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a25c82c-f72c-4ecb-a760-a568761bd5f2-logs\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.848955 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxz48\" (UniqueName: \"kubernetes.io/projected/2a25c82c-f72c-4ecb-a760-a568761bd5f2-kube-api-access-nxz48\") pod 
\"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.849019 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.852504 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a25c82c-f72c-4ecb-a760-a568761bd5f2-logs\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.858934 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-public-tls-certs\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.863294 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-config-data\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.869951 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-combined-ca-bundle\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.870456 4890 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-internal-tls-certs\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.871755 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxz48\" (UniqueName: \"kubernetes.io/projected/2a25c82c-f72c-4ecb-a760-a568761bd5f2-kube-api-access-nxz48\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.887888 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-scripts\") pod \"placement-688fbc5db-f9csp\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.926975 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.950442 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-config-data\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.950492 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-public-tls-certs\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.950517 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68w8r\" (UniqueName: \"kubernetes.io/projected/db0e4f67-3406-4153-9fb3-3553f6fccad1-kube-api-access-68w8r\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.950541 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-combined-ca-bundle\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.950587 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-scripts\") pod \"keystone-5d6cd7788b-hrbst\" (UID: 
\"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.950615 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-fernet-keys\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.950657 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-internal-tls-certs\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:19 crc kubenswrapper[4890]: I0121 15:53:19.950690 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-credential-keys\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.012999 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.052247 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9fhh\" (UniqueName: \"kubernetes.io/projected/ced3b279-b256-483b-af6f-3b13721f1ef8-kube-api-access-k9fhh\") pod \"ced3b279-b256-483b-af6f-3b13721f1ef8\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.052317 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-combined-ca-bundle\") pod \"ced3b279-b256-483b-af6f-3b13721f1ef8\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.052477 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-config\") pod \"ced3b279-b256-483b-af6f-3b13721f1ef8\" (UID: \"ced3b279-b256-483b-af6f-3b13721f1ef8\") " Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.052820 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-scripts\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.052891 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-fernet-keys\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.052967 4890 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-internal-tls-certs\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.053027 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-credential-keys\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.053107 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-config-data\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.053160 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-public-tls-certs\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.053193 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68w8r\" (UniqueName: \"kubernetes.io/projected/db0e4f67-3406-4153-9fb3-3553f6fccad1-kube-api-access-68w8r\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.053227 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-combined-ca-bundle\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.056795 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced3b279-b256-483b-af6f-3b13721f1ef8-kube-api-access-k9fhh" (OuterVolumeSpecName: "kube-api-access-k9fhh") pod "ced3b279-b256-483b-af6f-3b13721f1ef8" (UID: "ced3b279-b256-483b-af6f-3b13721f1ef8"). InnerVolumeSpecName "kube-api-access-k9fhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.057401 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-scripts\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.057786 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-fernet-keys\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.058804 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-credential-keys\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.059246 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-config-data\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.060763 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-internal-tls-certs\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.071121 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-public-tls-certs\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.072631 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-combined-ca-bundle\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.073582 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68w8r\" (UniqueName: \"kubernetes.io/projected/db0e4f67-3406-4153-9fb3-3553f6fccad1-kube-api-access-68w8r\") pod \"keystone-5d6cd7788b-hrbst\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.090757 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"ced3b279-b256-483b-af6f-3b13721f1ef8" (UID: "ced3b279-b256-483b-af6f-3b13721f1ef8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.090790 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-config" (OuterVolumeSpecName: "config") pod "ced3b279-b256-483b-af6f-3b13721f1ef8" (UID: "ced3b279-b256-483b-af6f-3b13721f1ef8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.119423 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.155581 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.155618 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9fhh\" (UniqueName: \"kubernetes.io/projected/ced3b279-b256-483b-af6f-3b13721f1ef8-kube-api-access-k9fhh\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.155631 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced3b279-b256-483b-af6f-3b13721f1ef8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.496826 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-688fbc5db-f9csp"] Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.558763 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerStarted","Data":"8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7"} Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.561889 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f8v9z" event={"ID":"ced3b279-b256-483b-af6f-3b13721f1ef8","Type":"ContainerDied","Data":"7fe80c95a7ed625981b3d5c4e290e71b85c05b8b2e1ce190bfabbff93874fb98"} Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.561937 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe80c95a7ed625981b3d5c4e290e71b85c05b8b2e1ce190bfabbff93874fb98" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.561972 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-f8v9z" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.576547 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-688fbc5db-f9csp" event={"ID":"2a25c82c-f72c-4ecb-a760-a568761bd5f2","Type":"ContainerStarted","Data":"088703b7d204342f58913c79423558d243e3908868214c9cb4562bed6d264701"} Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.576592 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.576700 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.625525 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d6cd7788b-hrbst"] Jan 21 15:53:20 crc kubenswrapper[4890]: W0121 15:53:20.637320 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb0e4f67_3406_4153_9fb3_3553f6fccad1.slice/crio-ba23d8edb5f63f3200279a3e78acf00fc988ae93e6b1b092da859323084799f0 
WatchSource:0}: Error finding container ba23d8edb5f63f3200279a3e78acf00fc988ae93e6b1b092da859323084799f0: Status 404 returned error can't find the container with id ba23d8edb5f63f3200279a3e78acf00fc988ae93e6b1b092da859323084799f0 Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.762497 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-cbzsv"] Jan 21 15:53:20 crc kubenswrapper[4890]: E0121 15:53:20.763326 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced3b279-b256-483b-af6f-3b13721f1ef8" containerName="neutron-db-sync" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.763366 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced3b279-b256-483b-af6f-3b13721f1ef8" containerName="neutron-db-sync" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.763589 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ced3b279-b256-483b-af6f-3b13721f1ef8" containerName="neutron-db-sync" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.764995 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.798759 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-cbzsv"] Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.880189 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-config\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.880229 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.880261 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjzf7\" (UniqueName: \"kubernetes.io/projected/53e0153f-134a-480a-9612-c0cd342594c5-kube-api-access-bjzf7\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.880282 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.883715 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.883917 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.900913 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bc6b59f74-wjlfx"] Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.906596 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.909976 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.910196 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.910412 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.910620 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sx6lz" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.914606 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bc6b59f74-wjlfx"] Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986339 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z9cv7\" (UniqueName: \"kubernetes.io/projected/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-kube-api-access-z9cv7\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986495 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986525 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-ovndb-tls-certs\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986562 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-config\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986623 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-config\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986650 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986680 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-combined-ca-bundle\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986707 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjzf7\" (UniqueName: \"kubernetes.io/projected/53e0153f-134a-480a-9612-c0cd342594c5-kube-api-access-bjzf7\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986726 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986758 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-httpd-config\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.986784 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.987815 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.988601 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.988660 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.989265 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-config\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:20 crc kubenswrapper[4890]: I0121 15:53:20.992729 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: 
\"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.008901 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjzf7\" (UniqueName: \"kubernetes.io/projected/53e0153f-134a-480a-9612-c0cd342594c5-kube-api-access-bjzf7\") pod \"dnsmasq-dns-6b9c8b59c-cbzsv\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.088343 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9cv7\" (UniqueName: \"kubernetes.io/projected/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-kube-api-access-z9cv7\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.088413 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-ovndb-tls-certs\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.088462 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-config\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.088564 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-combined-ca-bundle\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " 
pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.088618 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-httpd-config\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.088637 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.094249 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-combined-ca-bundle\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.097888 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-ovndb-tls-certs\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.098828 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-config\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.107086 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9cv7\" (UniqueName: \"kubernetes.io/projected/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-kube-api-access-z9cv7\") pod \"neutron-7bc6b59f74-wjlfx\" 
(UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.107645 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-httpd-config\") pod \"neutron-7bc6b59f74-wjlfx\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.243740 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.524409 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-cbzsv"] Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.584667 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" event={"ID":"53e0153f-134a-480a-9612-c0cd342594c5","Type":"ContainerStarted","Data":"65469e8ed8aafa25071a79c3d72d8b3c158c31214c3568cd3f732676d01c7504"} Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.586919 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-688fbc5db-f9csp" event={"ID":"2a25c82c-f72c-4ecb-a760-a568761bd5f2","Type":"ContainerStarted","Data":"1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1"} Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.586942 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-688fbc5db-f9csp" event={"ID":"2a25c82c-f72c-4ecb-a760-a568761bd5f2","Type":"ContainerStarted","Data":"98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac"} Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.588697 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d6cd7788b-hrbst" 
event={"ID":"db0e4f67-3406-4153-9fb3-3553f6fccad1","Type":"ContainerStarted","Data":"ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f"} Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.588747 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d6cd7788b-hrbst" event={"ID":"db0e4f67-3406-4153-9fb3-3553f6fccad1","Type":"ContainerStarted","Data":"ba23d8edb5f63f3200279a3e78acf00fc988ae93e6b1b092da859323084799f0"} Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.588781 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.615524 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5d6cd7788b-hrbst" podStartSLOduration=2.615502484 podStartE2EDuration="2.615502484s" podCreationTimestamp="2026-01-21 15:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:21.604272805 +0000 UTC m=+1283.965715224" watchObservedRunningTime="2026-01-21 15:53:21.615502484 +0000 UTC m=+1283.976944893" Jan 21 15:53:21 crc kubenswrapper[4890]: W0121 15:53:21.832428 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e7a9d2f_f167_4b8a_a107_c1264ba6c4b8.slice/crio-43ad9ca3720a90c93940cfedc082803f78f521a89371ac3d3bfc4383a0373797 WatchSource:0}: Error finding container 43ad9ca3720a90c93940cfedc082803f78f521a89371ac3d3bfc4383a0373797: Status 404 returned error can't find the container with id 43ad9ca3720a90c93940cfedc082803f78f521a89371ac3d3bfc4383a0373797 Jan 21 15:53:21 crc kubenswrapper[4890]: I0121 15:53:21.838109 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bc6b59f74-wjlfx"] Jan 21 15:53:22 crc kubenswrapper[4890]: I0121 15:53:22.599233 4890 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc6b59f74-wjlfx" event={"ID":"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8","Type":"ContainerStarted","Data":"43ad9ca3720a90c93940cfedc082803f78f521a89371ac3d3bfc4383a0373797"} Jan 21 15:53:22 crc kubenswrapper[4890]: I0121 15:53:22.599975 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:22 crc kubenswrapper[4890]: I0121 15:53:22.600124 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:22 crc kubenswrapper[4890]: I0121 15:53:22.600834 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:53:22 crc kubenswrapper[4890]: I0121 15:53:22.600969 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:53:22 crc kubenswrapper[4890]: I0121 15:53:22.626571 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-688fbc5db-f9csp" podStartSLOduration=3.626548846 podStartE2EDuration="3.626548846s" podCreationTimestamp="2026-01-21 15:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:22.624168026 +0000 UTC m=+1284.985610445" watchObservedRunningTime="2026-01-21 15:53:22.626548846 +0000 UTC m=+1284.987991265" Jan 21 15:53:22 crc kubenswrapper[4890]: I0121 15:53:22.709964 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 15:53:22 crc kubenswrapper[4890]: I0121 15:53:22.712467 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 15:53:22 crc kubenswrapper[4890]: I0121 15:53:22.740877 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 15:53:22 crc 
kubenswrapper[4890]: I0121 15:53:22.769242 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 15:53:23 crc kubenswrapper[4890]: I0121 15:53:23.166915 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 15:53:23 crc kubenswrapper[4890]: I0121 15:53:23.169896 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 15:53:23 crc kubenswrapper[4890]: I0121 15:53:23.609471 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc6b59f74-wjlfx" event={"ID":"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8","Type":"ContainerStarted","Data":"0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48"} Jan 21 15:53:23 crc kubenswrapper[4890]: I0121 15:53:23.611735 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" event={"ID":"53e0153f-134a-480a-9612-c0cd342594c5","Type":"ContainerStarted","Data":"99a269376f3d61532e81882e440bd69f0806de9201603b30a97c15670bbae6d5"} Jan 21 15:53:23 crc kubenswrapper[4890]: I0121 15:53:23.612973 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 15:53:23 crc kubenswrapper[4890]: I0121 15:53:23.613011 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.089197 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5585884bc-vnz4h"] Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.091819 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.098251 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.098263 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.107688 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5585884bc-vnz4h"] Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.172866 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-ovndb-tls-certs\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.172902 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-internal-tls-certs\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.172958 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-config\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.172973 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w64p\" (UniqueName: 
\"kubernetes.io/projected/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-kube-api-access-2w64p\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.173002 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-public-tls-certs\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.173062 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-combined-ca-bundle\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.173114 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-httpd-config\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.274519 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-public-tls-certs\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.274591 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-combined-ca-bundle\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.274632 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-httpd-config\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.274675 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-ovndb-tls-certs\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.274693 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-internal-tls-certs\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.274742 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-config\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.274760 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w64p\" (UniqueName: \"kubernetes.io/projected/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-kube-api-access-2w64p\") pod 
\"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.280912 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-internal-tls-certs\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.281662 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-httpd-config\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.283390 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-config\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.284146 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-public-tls-certs\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.287197 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-combined-ca-bundle\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: 
I0121 15:53:24.299867 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w64p\" (UniqueName: \"kubernetes.io/projected/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-kube-api-access-2w64p\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.302044 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-ovndb-tls-certs\") pod \"neutron-5585884bc-vnz4h\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.408261 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.625127 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc6b59f74-wjlfx" event={"ID":"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8","Type":"ContainerStarted","Data":"e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f"} Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.625294 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.630709 4890 generic.go:334] "Generic (PLEG): container finished" podID="53e0153f-134a-480a-9612-c0cd342594c5" containerID="99a269376f3d61532e81882e440bd69f0806de9201603b30a97c15670bbae6d5" exitCode=0 Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 15:53:24.630838 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" event={"ID":"53e0153f-134a-480a-9612-c0cd342594c5","Type":"ContainerDied","Data":"99a269376f3d61532e81882e440bd69f0806de9201603b30a97c15670bbae6d5"} Jan 21 15:53:24 crc kubenswrapper[4890]: I0121 
15:53:24.649111 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bc6b59f74-wjlfx" podStartSLOduration=4.649091419 podStartE2EDuration="4.649091419s" podCreationTimestamp="2026-01-21 15:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:24.648095904 +0000 UTC m=+1287.009538323" watchObservedRunningTime="2026-01-21 15:53:24.649091419 +0000 UTC m=+1287.010533828" Jan 21 15:53:25 crc kubenswrapper[4890]: I0121 15:53:25.641661 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:53:25 crc kubenswrapper[4890]: I0121 15:53:25.642155 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:53:26 crc kubenswrapper[4890]: I0121 15:53:26.255291 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 15:53:26 crc kubenswrapper[4890]: I0121 15:53:26.255657 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 15:53:26 crc kubenswrapper[4890]: I0121 15:53:26.448763 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5585884bc-vnz4h"] Jan 21 15:53:26 crc kubenswrapper[4890]: W0121 15:53:26.481244 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod902e1b21_9fb7_4302_b0f7_a832c7a42ca1.slice/crio-b9fb54e5f3f01a90ef7fef579e16b77f15fdb5d1bc2a563906f67031e84995b7 WatchSource:0}: Error finding container b9fb54e5f3f01a90ef7fef579e16b77f15fdb5d1bc2a563906f67031e84995b7: Status 404 returned error can't find the container with id b9fb54e5f3f01a90ef7fef579e16b77f15fdb5d1bc2a563906f67031e84995b7 Jan 21 15:53:26 crc kubenswrapper[4890]: I0121 15:53:26.652019 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-5585884bc-vnz4h" event={"ID":"902e1b21-9fb7-4302-b0f7-a832c7a42ca1","Type":"ContainerStarted","Data":"b9fb54e5f3f01a90ef7fef579e16b77f15fdb5d1bc2a563906f67031e84995b7"} Jan 21 15:53:26 crc kubenswrapper[4890]: I0121 15:53:26.655180 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" event={"ID":"53e0153f-134a-480a-9612-c0cd342594c5","Type":"ContainerStarted","Data":"f7af32f3b549ff9c597f62cbdae56ad477fa8bc3a6f8183f6ee62dcfb55b8bba"} Jan 21 15:53:26 crc kubenswrapper[4890]: I0121 15:53:26.656013 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:26 crc kubenswrapper[4890]: I0121 15:53:26.697560 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" podStartSLOduration=6.697537636 podStartE2EDuration="6.697537636s" podCreationTimestamp="2026-01-21 15:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:26.678536554 +0000 UTC m=+1289.039978963" watchObservedRunningTime="2026-01-21 15:53:26.697537636 +0000 UTC m=+1289.058980115" Jan 21 15:53:27 crc kubenswrapper[4890]: I0121 15:53:27.667683 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5585884bc-vnz4h" event={"ID":"902e1b21-9fb7-4302-b0f7-a832c7a42ca1","Type":"ContainerStarted","Data":"7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322"} Jan 21 15:53:28 crc kubenswrapper[4890]: I0121 15:53:28.678387 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ltrrf" event={"ID":"55d621d1-f812-4467-aeee-2ed0da3d68ac","Type":"ContainerStarted","Data":"c851675434b92477b334529677e77e7111d37132ce27fc8672bf63399697507c"} Jan 21 15:53:28 crc kubenswrapper[4890]: I0121 15:53:28.701708 4890 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-db-sync-ltrrf" podStartSLOduration=4.006426863 podStartE2EDuration="39.701693473s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="2026-01-21 15:52:50.788077964 +0000 UTC m=+1253.149520373" lastFinishedPulling="2026-01-21 15:53:26.483344574 +0000 UTC m=+1288.844786983" observedRunningTime="2026-01-21 15:53:28.697745295 +0000 UTC m=+1291.059187714" watchObservedRunningTime="2026-01-21 15:53:28.701693473 +0000 UTC m=+1291.063135882" Jan 21 15:53:31 crc kubenswrapper[4890]: I0121 15:53:31.090509 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:31 crc kubenswrapper[4890]: I0121 15:53:31.168630 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-xmn44"] Jan 21 15:53:31 crc kubenswrapper[4890]: I0121 15:53:31.168844 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" podUID="051e76fa-25e8-401b-b5e4-67feddadd6c6" containerName="dnsmasq-dns" containerID="cri-o://399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec" gracePeriod=10 Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.190404 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.333536 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-nb\") pod \"051e76fa-25e8-401b-b5e4-67feddadd6c6\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.333578 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-sb\") pod \"051e76fa-25e8-401b-b5e4-67feddadd6c6\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.333690 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-svc\") pod \"051e76fa-25e8-401b-b5e4-67feddadd6c6\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.333762 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t66nb\" (UniqueName: \"kubernetes.io/projected/051e76fa-25e8-401b-b5e4-67feddadd6c6-kube-api-access-t66nb\") pod \"051e76fa-25e8-401b-b5e4-67feddadd6c6\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.333800 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-config\") pod \"051e76fa-25e8-401b-b5e4-67feddadd6c6\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.333885 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-swift-storage-0\") pod \"051e76fa-25e8-401b-b5e4-67feddadd6c6\" (UID: \"051e76fa-25e8-401b-b5e4-67feddadd6c6\") " Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.341903 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051e76fa-25e8-401b-b5e4-67feddadd6c6-kube-api-access-t66nb" (OuterVolumeSpecName: "kube-api-access-t66nb") pod "051e76fa-25e8-401b-b5e4-67feddadd6c6" (UID: "051e76fa-25e8-401b-b5e4-67feddadd6c6"). InnerVolumeSpecName "kube-api-access-t66nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.382937 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "051e76fa-25e8-401b-b5e4-67feddadd6c6" (UID: "051e76fa-25e8-401b-b5e4-67feddadd6c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.384134 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "051e76fa-25e8-401b-b5e4-67feddadd6c6" (UID: "051e76fa-25e8-401b-b5e4-67feddadd6c6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.398165 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "051e76fa-25e8-401b-b5e4-67feddadd6c6" (UID: "051e76fa-25e8-401b-b5e4-67feddadd6c6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.398700 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "051e76fa-25e8-401b-b5e4-67feddadd6c6" (UID: "051e76fa-25e8-401b-b5e4-67feddadd6c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.409958 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-config" (OuterVolumeSpecName: "config") pod "051e76fa-25e8-401b-b5e4-67feddadd6c6" (UID: "051e76fa-25e8-401b-b5e4-67feddadd6c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.435883 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.435924 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t66nb\" (UniqueName: \"kubernetes.io/projected/051e76fa-25e8-401b-b5e4-67feddadd6c6-kube-api-access-t66nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.435938 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.435951 4890 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:32 
crc kubenswrapper[4890]: I0121 15:53:32.435963 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.435975 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051e76fa-25e8-401b-b5e4-67feddadd6c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.715594 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wm8lg" event={"ID":"5e75f4bb-e544-49f4-88ba-ed75d8d0365f","Type":"ContainerStarted","Data":"00f82df8616c87c0912d9dbe1f4399e94dfccbba1ce9e11654e84e00c004fd71"} Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.717603 4890 generic.go:334] "Generic (PLEG): container finished" podID="051e76fa-25e8-401b-b5e4-67feddadd6c6" containerID="399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec" exitCode=0 Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.717677 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.717679 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" event={"ID":"051e76fa-25e8-401b-b5e4-67feddadd6c6","Type":"ContainerDied","Data":"399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec"} Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.717952 4890 scope.go:117] "RemoveContainer" containerID="399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.717870 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-xmn44" event={"ID":"051e76fa-25e8-401b-b5e4-67feddadd6c6","Type":"ContainerDied","Data":"74c88e2f5cec77dc7f74f64b2af7c1f7948f92130c6ea95157eab4afddf7ffed"} Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.720516 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5585884bc-vnz4h" event={"ID":"902e1b21-9fb7-4302-b0f7-a832c7a42ca1","Type":"ContainerStarted","Data":"af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503"} Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.721581 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.723853 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerStarted","Data":"b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8"} Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.724055 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="ceilometer-central-agent" containerID="cri-o://a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe" 
gracePeriod=30 Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.724424 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.724483 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="proxy-httpd" containerID="cri-o://b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8" gracePeriod=30 Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.724540 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="sg-core" containerID="cri-o://8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7" gracePeriod=30 Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.724604 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="ceilometer-notification-agent" containerID="cri-o://7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c" gracePeriod=30 Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.748170 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wm8lg" podStartSLOduration=8.307908989 podStartE2EDuration="43.748141685s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="2026-01-21 15:52:51.169241586 +0000 UTC m=+1253.530683995" lastFinishedPulling="2026-01-21 15:53:26.609474282 +0000 UTC m=+1288.970916691" observedRunningTime="2026-01-21 15:53:32.732196179 +0000 UTC m=+1295.093638608" watchObservedRunningTime="2026-01-21 15:53:32.748141685 +0000 UTC m=+1295.109584104" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.769035 4890 scope.go:117] "RemoveContainer" 
containerID="fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.774925 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5585884bc-vnz4h" podStartSLOduration=8.774902538 podStartE2EDuration="8.774902538s" podCreationTimestamp="2026-01-21 15:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:32.771241827 +0000 UTC m=+1295.132684276" watchObservedRunningTime="2026-01-21 15:53:32.774902538 +0000 UTC m=+1295.136344947" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.805692 4890 scope.go:117] "RemoveContainer" containerID="399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec" Jan 21 15:53:32 crc kubenswrapper[4890]: E0121 15:53:32.809801 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec\": container with ID starting with 399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec not found: ID does not exist" containerID="399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.809890 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec"} err="failed to get container status \"399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec\": rpc error: code = NotFound desc = could not find container \"399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec\": container with ID starting with 399caefb69d410aec3068b4d07493d194baa548222e73aea247f89d00e3427ec not found: ID does not exist" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.809922 4890 scope.go:117] "RemoveContainer" 
containerID="fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa" Jan 21 15:53:32 crc kubenswrapper[4890]: E0121 15:53:32.813948 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa\": container with ID starting with fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa not found: ID does not exist" containerID="fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.813991 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa"} err="failed to get container status \"fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa\": rpc error: code = NotFound desc = could not find container \"fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa\": container with ID starting with fbc7041ffeb985811bbab253664f54addd86aba7de420f425294de95be0968fa not found: ID does not exist" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.818998 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.588986804 podStartE2EDuration="43.818979061s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="2026-01-21 15:52:50.786135115 +0000 UTC m=+1253.147577534" lastFinishedPulling="2026-01-21 15:53:32.016127382 +0000 UTC m=+1294.377569791" observedRunningTime="2026-01-21 15:53:32.80479829 +0000 UTC m=+1295.166240689" watchObservedRunningTime="2026-01-21 15:53:32.818979061 +0000 UTC m=+1295.180421470" Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.839372 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-xmn44"] Jan 21 15:53:32 crc kubenswrapper[4890]: I0121 15:53:32.849059 4890 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-xmn44"] Jan 21 15:53:33 crc kubenswrapper[4890]: I0121 15:53:33.733501 4890 generic.go:334] "Generic (PLEG): container finished" podID="4b9b0697-d7f9-404f-9311-214c97146a27" containerID="b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8" exitCode=0 Jan 21 15:53:33 crc kubenswrapper[4890]: I0121 15:53:33.733533 4890 generic.go:334] "Generic (PLEG): container finished" podID="4b9b0697-d7f9-404f-9311-214c97146a27" containerID="8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7" exitCode=2 Jan 21 15:53:33 crc kubenswrapper[4890]: I0121 15:53:33.733539 4890 generic.go:334] "Generic (PLEG): container finished" podID="4b9b0697-d7f9-404f-9311-214c97146a27" containerID="a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe" exitCode=0 Jan 21 15:53:33 crc kubenswrapper[4890]: I0121 15:53:33.733574 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerDied","Data":"b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8"} Jan 21 15:53:33 crc kubenswrapper[4890]: I0121 15:53:33.733599 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerDied","Data":"8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7"} Jan 21 15:53:33 crc kubenswrapper[4890]: I0121 15:53:33.733609 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerDied","Data":"a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe"} Jan 21 15:53:33 crc kubenswrapper[4890]: I0121 15:53:33.735435 4890 generic.go:334] "Generic (PLEG): container finished" podID="55d621d1-f812-4467-aeee-2ed0da3d68ac" containerID="c851675434b92477b334529677e77e7111d37132ce27fc8672bf63399697507c" exitCode=0 Jan 21 
15:53:33 crc kubenswrapper[4890]: I0121 15:53:33.735497 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ltrrf" event={"ID":"55d621d1-f812-4467-aeee-2ed0da3d68ac","Type":"ContainerDied","Data":"c851675434b92477b334529677e77e7111d37132ce27fc8672bf63399697507c"} Jan 21 15:53:33 crc kubenswrapper[4890]: I0121 15:53:33.931085 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051e76fa-25e8-401b-b5e4-67feddadd6c6" path="/var/lib/kubelet/pods/051e76fa-25e8-401b-b5e4-67feddadd6c6/volumes" Jan 21 15:53:34 crc kubenswrapper[4890]: I0121 15:53:34.748451 4890 generic.go:334] "Generic (PLEG): container finished" podID="5e75f4bb-e544-49f4-88ba-ed75d8d0365f" containerID="00f82df8616c87c0912d9dbe1f4399e94dfccbba1ce9e11654e84e00c004fd71" exitCode=0 Jan 21 15:53:34 crc kubenswrapper[4890]: I0121 15:53:34.748507 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wm8lg" event={"ID":"5e75f4bb-e544-49f4-88ba-ed75d8d0365f","Type":"ContainerDied","Data":"00f82df8616c87c0912d9dbe1f4399e94dfccbba1ce9e11654e84e00c004fd71"} Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.071854 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.192468 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f467\" (UniqueName: \"kubernetes.io/projected/55d621d1-f812-4467-aeee-2ed0da3d68ac-kube-api-access-4f467\") pod \"55d621d1-f812-4467-aeee-2ed0da3d68ac\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.192917 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-config-data\") pod \"55d621d1-f812-4467-aeee-2ed0da3d68ac\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.193055 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-db-sync-config-data\") pod \"55d621d1-f812-4467-aeee-2ed0da3d68ac\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.193145 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55d621d1-f812-4467-aeee-2ed0da3d68ac-etc-machine-id\") pod \"55d621d1-f812-4467-aeee-2ed0da3d68ac\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.193196 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-combined-ca-bundle\") pod \"55d621d1-f812-4467-aeee-2ed0da3d68ac\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.193217 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-scripts\") pod \"55d621d1-f812-4467-aeee-2ed0da3d68ac\" (UID: \"55d621d1-f812-4467-aeee-2ed0da3d68ac\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.193344 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d621d1-f812-4467-aeee-2ed0da3d68ac-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "55d621d1-f812-4467-aeee-2ed0da3d68ac" (UID: "55d621d1-f812-4467-aeee-2ed0da3d68ac"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.194154 4890 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55d621d1-f812-4467-aeee-2ed0da3d68ac-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.198679 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-scripts" (OuterVolumeSpecName: "scripts") pod "55d621d1-f812-4467-aeee-2ed0da3d68ac" (UID: "55d621d1-f812-4467-aeee-2ed0da3d68ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.199708 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "55d621d1-f812-4467-aeee-2ed0da3d68ac" (UID: "55d621d1-f812-4467-aeee-2ed0da3d68ac"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.200571 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d621d1-f812-4467-aeee-2ed0da3d68ac-kube-api-access-4f467" (OuterVolumeSpecName: "kube-api-access-4f467") pod "55d621d1-f812-4467-aeee-2ed0da3d68ac" (UID: "55d621d1-f812-4467-aeee-2ed0da3d68ac"). InnerVolumeSpecName "kube-api-access-4f467". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.225559 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55d621d1-f812-4467-aeee-2ed0da3d68ac" (UID: "55d621d1-f812-4467-aeee-2ed0da3d68ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.257583 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-config-data" (OuterVolumeSpecName: "config-data") pod "55d621d1-f812-4467-aeee-2ed0da3d68ac" (UID: "55d621d1-f812-4467-aeee-2ed0da3d68ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.296262 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.296296 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.296306 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f467\" (UniqueName: \"kubernetes.io/projected/55d621d1-f812-4467-aeee-2ed0da3d68ac-kube-api-access-4f467\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.296316 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.296325 4890 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55d621d1-f812-4467-aeee-2ed0da3d68ac-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.749426 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.759638 4890 generic.go:334] "Generic (PLEG): container finished" podID="4b9b0697-d7f9-404f-9311-214c97146a27" containerID="7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c" exitCode=0 Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.759712 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerDied","Data":"7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c"} Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.759741 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b9b0697-d7f9-404f-9311-214c97146a27","Type":"ContainerDied","Data":"7ac868d8477e72f95aa8feff337ee4c7a8d42efada01be7e61b69e1fdec4b771"} Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.759714 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.759760 4890 scope.go:117] "RemoveContainer" containerID="b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.761196 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ltrrf" event={"ID":"55d621d1-f812-4467-aeee-2ed0da3d68ac","Type":"ContainerDied","Data":"ba5b6b738bae84a71d7aa11997f063b97f6e205f0c0f8db95d8ba9c4940a3669"} Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.761225 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ltrrf" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.761252 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba5b6b738bae84a71d7aa11997f063b97f6e205f0c0f8db95d8ba9c4940a3669" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.788877 4890 scope.go:117] "RemoveContainer" containerID="8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.823101 4890 scope.go:117] "RemoveContainer" containerID="7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.843186 4890 scope.go:117] "RemoveContainer" containerID="a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.861697 4890 scope.go:117] "RemoveContainer" containerID="b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8" Jan 21 15:53:35 crc kubenswrapper[4890]: E0121 15:53:35.862113 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8\": container with ID starting with b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8 not found: ID does not exist" containerID="b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.862139 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8"} err="failed to get container status \"b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8\": rpc error: code = NotFound desc = could not find container \"b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8\": container with ID starting with 
b4651d1a0ccce8bf02a9deaee789fd68f7017f2e53e83e99f613a165ab4f1eb8 not found: ID does not exist" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.862161 4890 scope.go:117] "RemoveContainer" containerID="8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7" Jan 21 15:53:35 crc kubenswrapper[4890]: E0121 15:53:35.862507 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7\": container with ID starting with 8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7 not found: ID does not exist" containerID="8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.862525 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7"} err="failed to get container status \"8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7\": rpc error: code = NotFound desc = could not find container \"8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7\": container with ID starting with 8b68bedac9ab15871e569e858f606655876b61203574b4c8f75face62b0225e7 not found: ID does not exist" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.862543 4890 scope.go:117] "RemoveContainer" containerID="7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c" Jan 21 15:53:35 crc kubenswrapper[4890]: E0121 15:53:35.862747 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c\": container with ID starting with 7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c not found: ID does not exist" containerID="7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c" Jan 21 15:53:35 crc 
kubenswrapper[4890]: I0121 15:53:35.862767 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c"} err="failed to get container status \"7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c\": rpc error: code = NotFound desc = could not find container \"7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c\": container with ID starting with 7620815578db7785dd32f09969e2cb1460583a108db23305dda53a1d0dd9ad5c not found: ID does not exist" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.862782 4890 scope.go:117] "RemoveContainer" containerID="a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe" Jan 21 15:53:35 crc kubenswrapper[4890]: E0121 15:53:35.862938 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe\": container with ID starting with a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe not found: ID does not exist" containerID="a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.862954 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe"} err="failed to get container status \"a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe\": rpc error: code = NotFound desc = could not find container \"a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe\": container with ID starting with a2c1cbf3be038f80a87723a2669d6d5e5dbe911a25dd77c169f4234d188a0bbe not found: ID does not exist" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.911412 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-config-data\") pod \"4b9b0697-d7f9-404f-9311-214c97146a27\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.911758 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-sg-core-conf-yaml\") pod \"4b9b0697-d7f9-404f-9311-214c97146a27\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.911809 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-log-httpd\") pod \"4b9b0697-d7f9-404f-9311-214c97146a27\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.911846 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-run-httpd\") pod \"4b9b0697-d7f9-404f-9311-214c97146a27\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.911938 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-combined-ca-bundle\") pod \"4b9b0697-d7f9-404f-9311-214c97146a27\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.911964 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2frzc\" (UniqueName: \"kubernetes.io/projected/4b9b0697-d7f9-404f-9311-214c97146a27-kube-api-access-2frzc\") pod \"4b9b0697-d7f9-404f-9311-214c97146a27\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 
15:53:35.912114 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-scripts\") pod \"4b9b0697-d7f9-404f-9311-214c97146a27\" (UID: \"4b9b0697-d7f9-404f-9311-214c97146a27\") " Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.914820 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b9b0697-d7f9-404f-9311-214c97146a27" (UID: "4b9b0697-d7f9-404f-9311-214c97146a27"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.915439 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b9b0697-d7f9-404f-9311-214c97146a27" (UID: "4b9b0697-d7f9-404f-9311-214c97146a27"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.922211 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9b0697-d7f9-404f-9311-214c97146a27-kube-api-access-2frzc" (OuterVolumeSpecName: "kube-api-access-2frzc") pod "4b9b0697-d7f9-404f-9311-214c97146a27" (UID: "4b9b0697-d7f9-404f-9311-214c97146a27"). InnerVolumeSpecName "kube-api-access-2frzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.922941 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-scripts" (OuterVolumeSpecName: "scripts") pod "4b9b0697-d7f9-404f-9311-214c97146a27" (UID: "4b9b0697-d7f9-404f-9311-214c97146a27"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:35 crc kubenswrapper[4890]: I0121 15:53:35.962838 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4b9b0697-d7f9-404f-9311-214c97146a27" (UID: "4b9b0697-d7f9-404f-9311-214c97146a27"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.019306 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.019368 4890 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.019386 4890 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.019398 4890 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b9b0697-d7f9-404f-9311-214c97146a27-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.019409 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2frzc\" (UniqueName: \"kubernetes.io/projected/4b9b0697-d7f9-404f-9311-214c97146a27-kube-api-access-2frzc\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.030073 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b9b0697-d7f9-404f-9311-214c97146a27" (UID: "4b9b0697-d7f9-404f-9311-214c97146a27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.036001 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-config-data" (OuterVolumeSpecName: "config-data") pod "4b9b0697-d7f9-404f-9311-214c97146a27" (UID: "4b9b0697-d7f9-404f-9311-214c97146a27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.120631 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.120661 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9b0697-d7f9-404f-9311-214c97146a27-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.120681 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:53:36 crc kubenswrapper[4890]: E0121 15:53:36.121023 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051e76fa-25e8-401b-b5e4-67feddadd6c6" containerName="dnsmasq-dns" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121040 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="051e76fa-25e8-401b-b5e4-67feddadd6c6" containerName="dnsmasq-dns" Jan 21 15:53:36 crc kubenswrapper[4890]: E0121 15:53:36.121058 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" 
containerName="proxy-httpd" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121066 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="proxy-httpd" Jan 21 15:53:36 crc kubenswrapper[4890]: E0121 15:53:36.121077 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051e76fa-25e8-401b-b5e4-67feddadd6c6" containerName="init" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121083 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="051e76fa-25e8-401b-b5e4-67feddadd6c6" containerName="init" Jan 21 15:53:36 crc kubenswrapper[4890]: E0121 15:53:36.121099 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d621d1-f812-4467-aeee-2ed0da3d68ac" containerName="cinder-db-sync" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121105 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d621d1-f812-4467-aeee-2ed0da3d68ac" containerName="cinder-db-sync" Jan 21 15:53:36 crc kubenswrapper[4890]: E0121 15:53:36.121119 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="ceilometer-central-agent" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121125 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="ceilometer-central-agent" Jan 21 15:53:36 crc kubenswrapper[4890]: E0121 15:53:36.121141 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="ceilometer-notification-agent" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121148 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="ceilometer-notification-agent" Jan 21 15:53:36 crc kubenswrapper[4890]: E0121 15:53:36.121168 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" 
containerName="sg-core" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121175 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="sg-core" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121314 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d621d1-f812-4467-aeee-2ed0da3d68ac" containerName="cinder-db-sync" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121339 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="051e76fa-25e8-401b-b5e4-67feddadd6c6" containerName="dnsmasq-dns" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121383 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="ceilometer-central-agent" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121395 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="sg-core" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121403 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="ceilometer-notification-agent" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.121422 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9b0697-d7f9-404f-9311-214c97146a27" containerName="proxy-httpd" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.122536 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.125188 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.130387 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.132031 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.132850 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.132969 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8zb5j" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.167767 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.205433 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.222889 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.223004 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv96k\" (UniqueName: \"kubernetes.io/projected/1ee26918-8402-4dfe-a822-90ccc15dcefd-kube-api-access-qv96k\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.223047 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.223098 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.223118 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.223314 4890 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee26918-8402-4dfe-a822-90ccc15dcefd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.223542 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.236904 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-fs8gq"] Jan 21 15:53:36 crc kubenswrapper[4890]: E0121 15:53:36.237320 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e75f4bb-e544-49f4-88ba-ed75d8d0365f" containerName="barbican-db-sync" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.237337 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e75f4bb-e544-49f4-88ba-ed75d8d0365f" containerName="barbican-db-sync" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.237588 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e75f4bb-e544-49f4-88ba-ed75d8d0365f" containerName="barbican-db-sync" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.238549 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.325795 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-db-sync-config-data\") pod \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.326207 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8bs6\" (UniqueName: \"kubernetes.io/projected/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-kube-api-access-v8bs6\") pod \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.326309 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-combined-ca-bundle\") pod \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\" (UID: \"5e75f4bb-e544-49f4-88ba-ed75d8d0365f\") " Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.326868 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv96k\" (UniqueName: \"kubernetes.io/projected/1ee26918-8402-4dfe-a822-90ccc15dcefd-kube-api-access-qv96k\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.326905 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.326955 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.326982 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.327008 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee26918-8402-4dfe-a822-90ccc15dcefd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.327045 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.349818 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.350951 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.352501 4890 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee26918-8402-4dfe-a822-90ccc15dcefd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.355031 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.358864 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.364405 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-kube-api-access-v8bs6" (OuterVolumeSpecName: "kube-api-access-v8bs6") pod "5e75f4bb-e544-49f4-88ba-ed75d8d0365f" (UID: "5e75f4bb-e544-49f4-88ba-ed75d8d0365f"). InnerVolumeSpecName "kube-api-access-v8bs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.364756 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.364897 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.365769 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.367104 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-fs8gq"] Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.375094 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.376329 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5e75f4bb-e544-49f4-88ba-ed75d8d0365f" (UID: "5e75f4bb-e544-49f4-88ba-ed75d8d0365f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.377393 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.382256 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e75f4bb-e544-49f4-88ba-ed75d8d0365f" (UID: "5e75f4bb-e544-49f4-88ba-ed75d8d0365f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.382799 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv96k\" (UniqueName: \"kubernetes.io/projected/1ee26918-8402-4dfe-a822-90ccc15dcefd-kube-api-access-qv96k\") pod \"cinder-scheduler-0\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.429265 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-svc\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.429452 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-swift-storage-0\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " 
pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.429501 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-nb\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.429558 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jb87\" (UniqueName: \"kubernetes.io/projected/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-kube-api-access-4jb87\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.429594 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-config\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.429627 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-sb\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.429697 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8bs6\" (UniqueName: \"kubernetes.io/projected/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-kube-api-access-v8bs6\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 
15:53:36.429713 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.429726 4890 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e75f4bb-e544-49f4-88ba-ed75d8d0365f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.448630 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.450389 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.453145 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.461978 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.499381 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531182 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-config-data\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531222 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fwn6\" (UniqueName: \"kubernetes.io/projected/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-kube-api-access-2fwn6\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531238 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c356dd-aad0-4de6-bf7e-8d0031f22429-logs\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531263 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-nb\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531285 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-log-httpd\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531309 4890 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531477 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jb87\" (UniqueName: \"kubernetes.io/projected/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-kube-api-access-4jb87\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531526 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531600 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-config\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531631 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bdmr\" (UniqueName: \"kubernetes.io/projected/58c356dd-aad0-4de6-bf7e-8d0031f22429-kube-api-access-8bdmr\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531665 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-sb\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531685 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-scripts\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531720 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58c356dd-aad0-4de6-bf7e-8d0031f22429-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531738 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-run-httpd\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531774 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531805 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-svc\") pod 
\"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531868 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-scripts\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531911 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data-custom\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.531935 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.532671 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-nb\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.532886 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-swift-storage-0\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " 
pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.532921 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-config\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.533507 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-sb\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.533780 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-swift-storage-0\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.534323 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-svc\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.555756 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jb87\" (UniqueName: \"kubernetes.io/projected/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-kube-api-access-4jb87\") pod \"dnsmasq-dns-9f5756c4f-fs8gq\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.596785 
4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634268 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634317 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bdmr\" (UniqueName: \"kubernetes.io/projected/58c356dd-aad0-4de6-bf7e-8d0031f22429-kube-api-access-8bdmr\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634343 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-scripts\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634378 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58c356dd-aad0-4de6-bf7e-8d0031f22429-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634393 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-run-httpd\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634416 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634451 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-scripts\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634470 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data-custom\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634484 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634538 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-config-data\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634555 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fwn6\" (UniqueName: \"kubernetes.io/projected/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-kube-api-access-2fwn6\") pod \"ceilometer-0\" (UID: 
\"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634570 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c356dd-aad0-4de6-bf7e-8d0031f22429-logs\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634590 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-log-httpd\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.634609 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.636242 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c356dd-aad0-4de6-bf7e-8d0031f22429-logs\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.636839 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58c356dd-aad0-4de6-bf7e-8d0031f22429-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.637205 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-run-httpd\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.637861 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-log-httpd\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.639642 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-scripts\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.641570 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.641906 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-scripts\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.642676 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-config-data\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.644486 4890 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data-custom\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.645169 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.645570 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.646518 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.656093 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bdmr\" (UniqueName: \"kubernetes.io/projected/58c356dd-aad0-4de6-bf7e-8d0031f22429-kube-api-access-8bdmr\") pod \"cinder-api-0\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.657298 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fwn6\" (UniqueName: \"kubernetes.io/projected/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-kube-api-access-2fwn6\") pod \"ceilometer-0\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " 
pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.777056 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wm8lg" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.777016 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wm8lg" event={"ID":"5e75f4bb-e544-49f4-88ba-ed75d8d0365f","Type":"ContainerDied","Data":"1e35ac5caa661e9eca9b0f584f07243c28a50d1c426f7e23c3d41390c584c5f9"} Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.777108 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e35ac5caa661e9eca9b0f584f07243c28a50d1c426f7e23c3d41390c584c5f9" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.810260 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.812491 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.991460 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5969dffb49-ng442"] Jan 21 15:53:36 crc kubenswrapper[4890]: I0121 15:53:36.993319 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.006180 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.006406 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mc94m" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.006534 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.029433 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.068585 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5969dffb49-ng442"] Jan 21 15:53:37 crc kubenswrapper[4890]: W0121 15:53:37.083627 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ee26918_8402_4dfe_a822_90ccc15dcefd.slice/crio-be18516abdf549bad047557a50fda9bdb989c2049d4d8505e4d7bb2a4fee8f02 WatchSource:0}: Error finding container be18516abdf549bad047557a50fda9bdb989c2049d4d8505e4d7bb2a4fee8f02: Status 404 returned error can't find the container with id be18516abdf549bad047557a50fda9bdb989c2049d4d8505e4d7bb2a4fee8f02 Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.167521 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-846846cd4b-wmjvw"] Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.168965 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.171459 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.181214 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-combined-ca-bundle\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.181260 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.181313 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3466f4b-2d63-490d-bae0-0921a4874daa-logs\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.181408 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm2s6\" (UniqueName: \"kubernetes.io/projected/d3466f4b-2d63-490d-bae0-0921a4874daa-kube-api-access-hm2s6\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc 
kubenswrapper[4890]: I0121 15:53:37.181466 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-combined-ca-bundle\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.181487 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x94t\" (UniqueName: \"kubernetes.io/projected/0365c802-8af2-4230-a2e7-90959d273419-kube-api-access-9x94t\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.181511 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data-custom\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.181546 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data-custom\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.181627 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data\") pod 
\"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.181654 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0365c802-8af2-4230-a2e7-90959d273419-logs\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.202883 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-846846cd4b-wmjvw"] Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.237564 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-fs8gq"] Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.280858 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-fs8gq"] Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.282564 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm2s6\" (UniqueName: \"kubernetes.io/projected/d3466f4b-2d63-490d-bae0-0921a4874daa-kube-api-access-hm2s6\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.282626 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-combined-ca-bundle\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.282647 4890 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9x94t\" (UniqueName: \"kubernetes.io/projected/0365c802-8af2-4230-a2e7-90959d273419-kube-api-access-9x94t\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.284852 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data-custom\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.284915 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data-custom\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.285011 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.285035 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0365c802-8af2-4230-a2e7-90959d273419-logs\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.287221 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0365c802-8af2-4230-a2e7-90959d273419-logs\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.290820 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data-custom\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.291196 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-combined-ca-bundle\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.291227 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.291301 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3466f4b-2d63-490d-bae0-0921a4874daa-logs\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 
15:53:37.291585 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-combined-ca-bundle\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.291760 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3466f4b-2d63-490d-bae0-0921a4874daa-logs\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.297400 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.301123 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-combined-ca-bundle\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.302139 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.308091 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data-custom\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.317908 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm2s6\" (UniqueName: \"kubernetes.io/projected/d3466f4b-2d63-490d-bae0-0921a4874daa-kube-api-access-hm2s6\") pod \"barbican-worker-5969dffb49-ng442\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.319799 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-z8r2k"] Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.321326 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.330131 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x94t\" (UniqueName: \"kubernetes.io/projected/0365c802-8af2-4230-a2e7-90959d273419-kube-api-access-9x94t\") pod \"barbican-keystone-listener-846846cd4b-wmjvw\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.336882 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-z8r2k"] Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.365813 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b57ffb-952ln"] Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.368562 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.376418 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.382779 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b57ffb-952ln"] Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392378 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbcsl\" (UniqueName: \"kubernetes.io/projected/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-kube-api-access-zbcsl\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392466 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392497 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392527 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-logs\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " 
pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392563 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392617 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-combined-ca-bundle\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392661 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-config\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392691 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392720 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " 
pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392756 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsb2f\" (UniqueName: \"kubernetes.io/projected/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-kube-api-access-jsb2f\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.392788 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data-custom\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.405566 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.476117 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:53:37 crc kubenswrapper[4890]: W0121 15:53:37.484087 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c356dd_aad0_4de6_bf7e_8d0031f22429.slice/crio-272634da8c7c6e8999aed0ce11b42dca32d70d671b3f657c7a445e2ca2720267 WatchSource:0}: Error finding container 272634da8c7c6e8999aed0ce11b42dca32d70d671b3f657c7a445e2ca2720267: Status 404 returned error can't find the container with id 272634da8c7c6e8999aed0ce11b42dca32d70d671b3f657c7a445e2ca2720267 Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494035 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-svc\") pod 
\"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494101 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494142 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsb2f\" (UniqueName: \"kubernetes.io/projected/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-kube-api-access-jsb2f\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494181 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data-custom\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494260 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbcsl\" (UniqueName: \"kubernetes.io/projected/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-kube-api-access-zbcsl\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494312 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-swift-storage-0\") pod 
\"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494341 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494394 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-logs\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494436 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494481 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-combined-ca-bundle\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.494554 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-config\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " 
pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.495239 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-logs\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.495418 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.495866 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.497184 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.498135 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-config\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.499041 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.499101 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data-custom\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.505927 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.507661 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-combined-ca-bundle\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.516742 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsb2f\" (UniqueName: \"kubernetes.io/projected/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-kube-api-access-jsb2f\") pod \"dnsmasq-dns-75bfc9b94f-z8r2k\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.516921 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zbcsl\" (UniqueName: \"kubernetes.io/projected/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-kube-api-access-zbcsl\") pod \"barbican-api-6b57ffb-952ln\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.606768 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.615766 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:53:37 crc kubenswrapper[4890]: W0121 15:53:37.620761 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd89dc199_2b6a_4d15_a8f2_ccdc8ed32786.slice/crio-72fa0a40efd47ed30c6ebd0094e93c847fe8f502aaf5bb204883dc9dc2400f8f WatchSource:0}: Error finding container 72fa0a40efd47ed30c6ebd0094e93c847fe8f502aaf5bb204883dc9dc2400f8f: Status 404 returned error can't find the container with id 72fa0a40efd47ed30c6ebd0094e93c847fe8f502aaf5bb204883dc9dc2400f8f Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.646593 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.692704 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.843279 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ee26918-8402-4dfe-a822-90ccc15dcefd","Type":"ContainerStarted","Data":"be18516abdf549bad047557a50fda9bdb989c2049d4d8505e4d7bb2a4fee8f02"} Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.854131 4890 generic.go:334] "Generic (PLEG): container finished" podID="6c6d41ae-0a22-4a83-a064-2df2a1ab9709" containerID="1f125f4fa65fd179c104723f80db7095fc7a0d4b02427ee165df20c016d42c9a" exitCode=0 Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.854253 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" event={"ID":"6c6d41ae-0a22-4a83-a064-2df2a1ab9709","Type":"ContainerDied","Data":"1f125f4fa65fd179c104723f80db7095fc7a0d4b02427ee165df20c016d42c9a"} Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.854287 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" event={"ID":"6c6d41ae-0a22-4a83-a064-2df2a1ab9709","Type":"ContainerStarted","Data":"871d937840f52e8df4a405598ecf9656703fdad44b07d69668559ac22356a163"} Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.861451 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerStarted","Data":"72fa0a40efd47ed30c6ebd0094e93c847fe8f502aaf5bb204883dc9dc2400f8f"} Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.864163 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58c356dd-aad0-4de6-bf7e-8d0031f22429","Type":"ContainerStarted","Data":"272634da8c7c6e8999aed0ce11b42dca32d70d671b3f657c7a445e2ca2720267"} Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.960726 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4b9b0697-d7f9-404f-9311-214c97146a27" path="/var/lib/kubelet/pods/4b9b0697-d7f9-404f-9311-214c97146a27/volumes" Jan 21 15:53:37 crc kubenswrapper[4890]: I0121 15:53:37.962505 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5969dffb49-ng442"] Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.114697 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-846846cd4b-wmjvw"] Jan 21 15:53:38 crc kubenswrapper[4890]: W0121 15:53:38.126370 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0365c802_8af2_4230_a2e7_90959d273419.slice/crio-06150ec105d0e875a593ee4b4872021c717cf9b30faea083da88f68204d4e18f WatchSource:0}: Error finding container 06150ec105d0e875a593ee4b4872021c717cf9b30faea083da88f68204d4e18f: Status 404 returned error can't find the container with id 06150ec105d0e875a593ee4b4872021c717cf9b30faea083da88f68204d4e18f Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.339641 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-z8r2k"] Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.361133 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b57ffb-952ln"] Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.528295 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.639125 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-nb\") pod \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.639255 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-config\") pod \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.639390 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-sb\") pod \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.639412 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-swift-storage-0\") pod \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.639467 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jb87\" (UniqueName: \"kubernetes.io/projected/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-kube-api-access-4jb87\") pod \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.639495 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-svc\") pod \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\" (UID: \"6c6d41ae-0a22-4a83-a064-2df2a1ab9709\") " Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.661087 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-kube-api-access-4jb87" (OuterVolumeSpecName: "kube-api-access-4jb87") pod "6c6d41ae-0a22-4a83-a064-2df2a1ab9709" (UID: "6c6d41ae-0a22-4a83-a064-2df2a1ab9709"). InnerVolumeSpecName "kube-api-access-4jb87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.687651 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c6d41ae-0a22-4a83-a064-2df2a1ab9709" (UID: "6c6d41ae-0a22-4a83-a064-2df2a1ab9709"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.689759 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c6d41ae-0a22-4a83-a064-2df2a1ab9709" (UID: "6c6d41ae-0a22-4a83-a064-2df2a1ab9709"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.691689 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c6d41ae-0a22-4a83-a064-2df2a1ab9709" (UID: "6c6d41ae-0a22-4a83-a064-2df2a1ab9709"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.700466 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c6d41ae-0a22-4a83-a064-2df2a1ab9709" (UID: "6c6d41ae-0a22-4a83-a064-2df2a1ab9709"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.700538 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-config" (OuterVolumeSpecName: "config") pod "6c6d41ae-0a22-4a83-a064-2df2a1ab9709" (UID: "6c6d41ae-0a22-4a83-a064-2df2a1ab9709"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.741880 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.741955 4890 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.741973 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jb87\" (UniqueName: \"kubernetes.io/projected/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-kube-api-access-4jb87\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.741987 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-dns-svc\") on node \"crc\" 
DevicePath \"\"" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.742000 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.742017 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6d41ae-0a22-4a83-a064-2df2a1ab9709-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.875939 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerStarted","Data":"5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add"} Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.878531 4890 generic.go:334] "Generic (PLEG): container finished" podID="78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" containerID="ba20004fef84efb5196818e15a0e66142dea7de3850f21b544cb88a0fc4da4ef" exitCode=0 Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.878608 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" event={"ID":"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130","Type":"ContainerDied","Data":"ba20004fef84efb5196818e15a0e66142dea7de3850f21b544cb88a0fc4da4ef"} Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.878627 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" event={"ID":"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130","Type":"ContainerStarted","Data":"4c64c8f44536d63ad005a2fe0344b9b0d4501d6add3f6a034d77acc7367cc122"} Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.888495 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b57ffb-952ln" 
event={"ID":"e9f074ad-e83e-495d-a1f0-177b0a9ffb85","Type":"ContainerStarted","Data":"d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581"} Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.888540 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b57ffb-952ln" event={"ID":"e9f074ad-e83e-495d-a1f0-177b0a9ffb85","Type":"ContainerStarted","Data":"70448d07d89f1032b4944cf12740aa50f6ee2b94105d77a827d44a7be5c57ee2"} Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.895641 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58c356dd-aad0-4de6-bf7e-8d0031f22429","Type":"ContainerStarted","Data":"a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3"} Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.909365 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" event={"ID":"0365c802-8af2-4230-a2e7-90959d273419","Type":"ContainerStarted","Data":"06150ec105d0e875a593ee4b4872021c717cf9b30faea083da88f68204d4e18f"} Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.924684 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" event={"ID":"6c6d41ae-0a22-4a83-a064-2df2a1ab9709","Type":"ContainerDied","Data":"871d937840f52e8df4a405598ecf9656703fdad44b07d69668559ac22356a163"} Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.924743 4890 scope.go:117] "RemoveContainer" containerID="1f125f4fa65fd179c104723f80db7095fc7a0d4b02427ee165df20c016d42c9a" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.924927 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f5756c4f-fs8gq" Jan 21 15:53:38 crc kubenswrapper[4890]: I0121 15:53:38.934547 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5969dffb49-ng442" event={"ID":"d3466f4b-2d63-490d-bae0-0921a4874daa","Type":"ContainerStarted","Data":"b38b5499e0e0b88d904a91a2076a29815438fe843f3eec5e66a7607c22417f8b"} Jan 21 15:53:39 crc kubenswrapper[4890]: I0121 15:53:39.022482 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-fs8gq"] Jan 21 15:53:39 crc kubenswrapper[4890]: I0121 15:53:39.033695 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9f5756c4f-fs8gq"] Jan 21 15:53:39 crc kubenswrapper[4890]: I0121 15:53:39.380251 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:53:39 crc kubenswrapper[4890]: I0121 15:53:39.934128 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6d41ae-0a22-4a83-a064-2df2a1ab9709" path="/var/lib/kubelet/pods/6c6d41ae-0a22-4a83-a064-2df2a1ab9709/volumes" Jan 21 15:53:39 crc kubenswrapper[4890]: I0121 15:53:39.945750 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b57ffb-952ln" event={"ID":"e9f074ad-e83e-495d-a1f0-177b0a9ffb85","Type":"ContainerStarted","Data":"52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466"} Jan 21 15:53:39 crc kubenswrapper[4890]: I0121 15:53:39.945962 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:39 crc kubenswrapper[4890]: I0121 15:53:39.995029 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b57ffb-952ln" podStartSLOduration=2.9950122 podStartE2EDuration="2.9950122s" podCreationTimestamp="2026-01-21 15:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 15:53:39.990468877 +0000 UTC m=+1302.351911326" watchObservedRunningTime="2026-01-21 15:53:39.9950122 +0000 UTC m=+1302.356454609" Jan 21 15:53:40 crc kubenswrapper[4890]: I0121 15:53:40.960161 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" event={"ID":"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130","Type":"ContainerStarted","Data":"184be0c67756e6f99f1d9e6bec269882a479d8f46ce0ae8cdbd971f416738626"} Jan 21 15:53:40 crc kubenswrapper[4890]: I0121 15:53:40.960451 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:40 crc kubenswrapper[4890]: I0121 15:53:40.962228 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58c356dd-aad0-4de6-bf7e-8d0031f22429","Type":"ContainerStarted","Data":"e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670"} Jan 21 15:53:40 crc kubenswrapper[4890]: I0121 15:53:40.962340 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerName="cinder-api-log" containerID="cri-o://a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3" gracePeriod=30 Jan 21 15:53:40 crc kubenswrapper[4890]: I0121 15:53:40.962994 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerName="cinder-api" containerID="cri-o://e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670" gracePeriod=30 Jan 21 15:53:40 crc kubenswrapper[4890]: I0121 15:53:40.963051 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 15:53:40 crc kubenswrapper[4890]: I0121 15:53:40.967939 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"1ee26918-8402-4dfe-a822-90ccc15dcefd","Type":"ContainerStarted","Data":"24ef859a60c45171a22988984e960e99c273538b2a591ad3e12dda8609ac24f5"} Jan 21 15:53:40 crc kubenswrapper[4890]: I0121 15:53:40.968038 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:40 crc kubenswrapper[4890]: I0121 15:53:40.982598 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" podStartSLOduration=3.982545892 podStartE2EDuration="3.982545892s" podCreationTimestamp="2026-01-21 15:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:40.978901811 +0000 UTC m=+1303.340344250" watchObservedRunningTime="2026-01-21 15:53:40.982545892 +0000 UTC m=+1303.343988301" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.004147 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.004125388 podStartE2EDuration="5.004125388s" podCreationTimestamp="2026-01-21 15:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:41.000307823 +0000 UTC m=+1303.361750262" watchObservedRunningTime="2026-01-21 15:53:41.004125388 +0000 UTC m=+1303.365567797" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.781680 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.842429 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data-custom\") pod \"58c356dd-aad0-4de6-bf7e-8d0031f22429\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.842495 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c356dd-aad0-4de6-bf7e-8d0031f22429-logs\") pod \"58c356dd-aad0-4de6-bf7e-8d0031f22429\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.842522 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-combined-ca-bundle\") pod \"58c356dd-aad0-4de6-bf7e-8d0031f22429\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.842549 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-scripts\") pod \"58c356dd-aad0-4de6-bf7e-8d0031f22429\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.842733 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data\") pod \"58c356dd-aad0-4de6-bf7e-8d0031f22429\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.842806 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bdmr\" (UniqueName: 
\"kubernetes.io/projected/58c356dd-aad0-4de6-bf7e-8d0031f22429-kube-api-access-8bdmr\") pod \"58c356dd-aad0-4de6-bf7e-8d0031f22429\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.842846 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58c356dd-aad0-4de6-bf7e-8d0031f22429-etc-machine-id\") pod \"58c356dd-aad0-4de6-bf7e-8d0031f22429\" (UID: \"58c356dd-aad0-4de6-bf7e-8d0031f22429\") " Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.843450 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58c356dd-aad0-4de6-bf7e-8d0031f22429-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "58c356dd-aad0-4de6-bf7e-8d0031f22429" (UID: "58c356dd-aad0-4de6-bf7e-8d0031f22429"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.847561 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c356dd-aad0-4de6-bf7e-8d0031f22429-logs" (OuterVolumeSpecName: "logs") pod "58c356dd-aad0-4de6-bf7e-8d0031f22429" (UID: "58c356dd-aad0-4de6-bf7e-8d0031f22429"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.848275 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c356dd-aad0-4de6-bf7e-8d0031f22429-kube-api-access-8bdmr" (OuterVolumeSpecName: "kube-api-access-8bdmr") pod "58c356dd-aad0-4de6-bf7e-8d0031f22429" (UID: "58c356dd-aad0-4de6-bf7e-8d0031f22429"). InnerVolumeSpecName "kube-api-access-8bdmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.849871 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-scripts" (OuterVolumeSpecName: "scripts") pod "58c356dd-aad0-4de6-bf7e-8d0031f22429" (UID: "58c356dd-aad0-4de6-bf7e-8d0031f22429"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.856496 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "58c356dd-aad0-4de6-bf7e-8d0031f22429" (UID: "58c356dd-aad0-4de6-bf7e-8d0031f22429"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.878070 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58c356dd-aad0-4de6-bf7e-8d0031f22429" (UID: "58c356dd-aad0-4de6-bf7e-8d0031f22429"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.927214 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data" (OuterVolumeSpecName: "config-data") pod "58c356dd-aad0-4de6-bf7e-8d0031f22429" (UID: "58c356dd-aad0-4de6-bf7e-8d0031f22429"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.945688 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.945730 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bdmr\" (UniqueName: \"kubernetes.io/projected/58c356dd-aad0-4de6-bf7e-8d0031f22429-kube-api-access-8bdmr\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.945785 4890 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58c356dd-aad0-4de6-bf7e-8d0031f22429-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.945802 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.945814 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c356dd-aad0-4de6-bf7e-8d0031f22429-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.945825 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:41 crc kubenswrapper[4890]: I0121 15:53:41.945836 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c356dd-aad0-4de6-bf7e-8d0031f22429-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.000951 4890 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerStarted","Data":"17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9"} Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.003202 4890 generic.go:334] "Generic (PLEG): container finished" podID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerID="e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670" exitCode=0 Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.003236 4890 generic.go:334] "Generic (PLEG): container finished" podID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerID="a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3" exitCode=143 Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.003287 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58c356dd-aad0-4de6-bf7e-8d0031f22429","Type":"ContainerDied","Data":"e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670"} Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.003321 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58c356dd-aad0-4de6-bf7e-8d0031f22429","Type":"ContainerDied","Data":"a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3"} Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.003336 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58c356dd-aad0-4de6-bf7e-8d0031f22429","Type":"ContainerDied","Data":"272634da8c7c6e8999aed0ce11b42dca32d70d671b3f657c7a445e2ca2720267"} Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.003372 4890 scope.go:117] "RemoveContainer" containerID="e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.003523 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.021745 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" event={"ID":"0365c802-8af2-4230-a2e7-90959d273419","Type":"ContainerStarted","Data":"9d30851de1888098b6eefb06ebbe23168f3d78920011b9933b08aff11f05029f"} Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.021799 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" event={"ID":"0365c802-8af2-4230-a2e7-90959d273419","Type":"ContainerStarted","Data":"c2f49312d4e89e99e690840cb5a943c78b8a717a32ae94d1b8fa6f3f50c660c1"} Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.036514 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ee26918-8402-4dfe-a822-90ccc15dcefd","Type":"ContainerStarted","Data":"f47f4b211d1d1be5e5bb9da17b9d3819d934f4fdd1eb01865cdf0d3ce89f2d0c"} Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.042016 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" podStartSLOduration=2.173896875 podStartE2EDuration="5.04199827s" podCreationTimestamp="2026-01-21 15:53:37 +0000 UTC" firstStartedPulling="2026-01-21 15:53:38.132324961 +0000 UTC m=+1300.493767370" lastFinishedPulling="2026-01-21 15:53:41.000426326 +0000 UTC m=+1303.361868765" observedRunningTime="2026-01-21 15:53:42.040464032 +0000 UTC m=+1304.401906461" watchObservedRunningTime="2026-01-21 15:53:42.04199827 +0000 UTC m=+1304.403440669" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.056601 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5969dffb49-ng442" event={"ID":"d3466f4b-2d63-490d-bae0-0921a4874daa","Type":"ContainerStarted","Data":"2ca05563eab7c7837a3f0611a032f1c0a8bc338b86d2e64c4be0a14c487366e0"} Jan 21 15:53:42 crc 
kubenswrapper[4890]: I0121 15:53:42.056648 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5969dffb49-ng442" event={"ID":"d3466f4b-2d63-490d-bae0-0921a4874daa","Type":"ContainerStarted","Data":"fec4b0c0a2231fb8d38d939d55a6826e9794606484374bcac4d37face3381fe7"} Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.087093 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.802320623 podStartE2EDuration="6.08707826s" podCreationTimestamp="2026-01-21 15:53:36 +0000 UTC" firstStartedPulling="2026-01-21 15:53:37.109608765 +0000 UTC m=+1299.471051164" lastFinishedPulling="2026-01-21 15:53:38.394366392 +0000 UTC m=+1300.755808801" observedRunningTime="2026-01-21 15:53:42.074570989 +0000 UTC m=+1304.436013418" watchObservedRunningTime="2026-01-21 15:53:42.08707826 +0000 UTC m=+1304.448520669" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.087469 4890 scope.go:117] "RemoveContainer" containerID="a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.097559 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.104532 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.122849 4890 scope.go:117] "RemoveContainer" containerID="e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670" Jan 21 15:53:42 crc kubenswrapper[4890]: E0121 15:53:42.124238 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670\": container with ID starting with e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670 not found: ID does not exist" 
containerID="e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.124275 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670"} err="failed to get container status \"e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670\": rpc error: code = NotFound desc = could not find container \"e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670\": container with ID starting with e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670 not found: ID does not exist" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.124299 4890 scope.go:117] "RemoveContainer" containerID="a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3" Jan 21 15:53:42 crc kubenswrapper[4890]: E0121 15:53:42.126629 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3\": container with ID starting with a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3 not found: ID does not exist" containerID="a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.126675 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3"} err="failed to get container status \"a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3\": rpc error: code = NotFound desc = could not find container \"a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3\": container with ID starting with a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3 not found: ID does not exist" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.126696 4890 scope.go:117] 
"RemoveContainer" containerID="e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.132470 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670"} err="failed to get container status \"e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670\": rpc error: code = NotFound desc = could not find container \"e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670\": container with ID starting with e42bfa1354a771bd1c8385c5c89785cd0d9a47f0960b055b54eddb08739c4670 not found: ID does not exist" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.132510 4890 scope.go:117] "RemoveContainer" containerID="a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.132603 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:53:42 crc kubenswrapper[4890]: E0121 15:53:42.133025 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerName="cinder-api" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.133047 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerName="cinder-api" Jan 21 15:53:42 crc kubenswrapper[4890]: E0121 15:53:42.133063 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6d41ae-0a22-4a83-a064-2df2a1ab9709" containerName="init" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.133069 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6d41ae-0a22-4a83-a064-2df2a1ab9709" containerName="init" Jan 21 15:53:42 crc kubenswrapper[4890]: E0121 15:53:42.133082 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerName="cinder-api-log" Jan 21 15:53:42 crc 
kubenswrapper[4890]: I0121 15:53:42.133088 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerName="cinder-api-log" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.133300 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerName="cinder-api" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.133328 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c356dd-aad0-4de6-bf7e-8d0031f22429" containerName="cinder-api-log" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.133340 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6d41ae-0a22-4a83-a064-2df2a1ab9709" containerName="init" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.133677 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3"} err="failed to get container status \"a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3\": rpc error: code = NotFound desc = could not find container \"a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3\": container with ID starting with a2af9e485a790e26ab6d17ff91cf94434db6461241b6b346939515eae4458ea3 not found: ID does not exist" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.134334 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.137073 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.137433 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.146920 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.182331 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.187203 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5969dffb49-ng442" podStartSLOduration=3.142720509 podStartE2EDuration="6.187176427s" podCreationTimestamp="2026-01-21 15:53:36 +0000 UTC" firstStartedPulling="2026-01-21 15:53:37.951853166 +0000 UTC m=+1300.313295575" lastFinishedPulling="2026-01-21 15:53:40.996309064 +0000 UTC m=+1303.357751493" observedRunningTime="2026-01-21 15:53:42.135418621 +0000 UTC m=+1304.496861040" watchObservedRunningTime="2026-01-21 15:53:42.187176427 +0000 UTC m=+1304.548618836" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.253591 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-public-tls-certs\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.253641 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371fefce-bb16-4c48-ac5a-01885e77c090-logs\") pod \"cinder-api-0\" (UID: 
\"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.253805 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqpmr\" (UniqueName: \"kubernetes.io/projected/371fefce-bb16-4c48-ac5a-01885e77c090-kube-api-access-nqpmr\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.254064 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.254096 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.254213 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.254263 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-scripts\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 
15:53:42.254277 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data-custom\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.254296 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/371fefce-bb16-4c48-ac5a-01885e77c090-etc-machine-id\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.356424 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.356492 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-scripts\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.356510 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data-custom\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.356530 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/371fefce-bb16-4c48-ac5a-01885e77c090-etc-machine-id\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.356565 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-public-tls-certs\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.356591 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371fefce-bb16-4c48-ac5a-01885e77c090-logs\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.356617 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqpmr\" (UniqueName: \"kubernetes.io/projected/371fefce-bb16-4c48-ac5a-01885e77c090-kube-api-access-nqpmr\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.356675 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.356713 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc 
kubenswrapper[4890]: I0121 15:53:42.358035 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371fefce-bb16-4c48-ac5a-01885e77c090-logs\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.358051 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/371fefce-bb16-4c48-ac5a-01885e77c090-etc-machine-id\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.363172 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.363362 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-scripts\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.364211 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-public-tls-certs\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.365541 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.366226 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.369501 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.383007 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqpmr\" (UniqueName: \"kubernetes.io/projected/371fefce-bb16-4c48-ac5a-01885e77c090-kube-api-access-nqpmr\") pod \"cinder-api-0\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.489068 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:53:42 crc kubenswrapper[4890]: I0121 15:53:42.957524 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:53:42 crc kubenswrapper[4890]: W0121 15:53:42.965152 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod371fefce_bb16_4c48_ac5a_01885e77c090.slice/crio-678cb303b1c60ecec61a6716ae72881f7f9503ea5e7f72f36da5431eda4ea2c7 WatchSource:0}: Error finding container 678cb303b1c60ecec61a6716ae72881f7f9503ea5e7f72f36da5431eda4ea2c7: Status 404 returned error can't find the container with id 678cb303b1c60ecec61a6716ae72881f7f9503ea5e7f72f36da5431eda4ea2c7 Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.066093 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerStarted","Data":"a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2"} Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.067442 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"371fefce-bb16-4c48-ac5a-01885e77c090","Type":"ContainerStarted","Data":"678cb303b1c60ecec61a6716ae72881f7f9503ea5e7f72f36da5431eda4ea2c7"} Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.657452 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6946c9f5b4-2l82t"] Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.660266 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.663665 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.663834 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.679268 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6946c9f5b4-2l82t"] Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.783881 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-internal-tls-certs\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.783928 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmhn6\" (UniqueName: \"kubernetes.io/projected/33bbda2a-fde6-466f-92c8-88556941b8a3-kube-api-access-gmhn6\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.783965 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.783988 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/33bbda2a-fde6-466f-92c8-88556941b8a3-logs\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.784022 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-public-tls-certs\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.784051 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data-custom\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.784076 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-combined-ca-bundle\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.885548 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-internal-tls-certs\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.885605 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gmhn6\" (UniqueName: \"kubernetes.io/projected/33bbda2a-fde6-466f-92c8-88556941b8a3-kube-api-access-gmhn6\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.885638 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.885661 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33bbda2a-fde6-466f-92c8-88556941b8a3-logs\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.885705 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-public-tls-certs\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.885749 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data-custom\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.885774 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-combined-ca-bundle\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.886985 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33bbda2a-fde6-466f-92c8-88556941b8a3-logs\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.890170 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-internal-tls-certs\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.890567 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-public-tls-certs\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.890769 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.891042 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-combined-ca-bundle\") pod 
\"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.893491 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data-custom\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.908330 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmhn6\" (UniqueName: \"kubernetes.io/projected/33bbda2a-fde6-466f-92c8-88556941b8a3-kube-api-access-gmhn6\") pod \"barbican-api-6946c9f5b4-2l82t\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.934198 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c356dd-aad0-4de6-bf7e-8d0031f22429" path="/var/lib/kubelet/pods/58c356dd-aad0-4de6-bf7e-8d0031f22429/volumes" Jan 21 15:53:43 crc kubenswrapper[4890]: I0121 15:53:43.987896 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:44 crc kubenswrapper[4890]: I0121 15:53:44.104738 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"371fefce-bb16-4c48-ac5a-01885e77c090","Type":"ContainerStarted","Data":"766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7"} Jan 21 15:53:44 crc kubenswrapper[4890]: I0121 15:53:44.119173 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerStarted","Data":"d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e"} Jan 21 15:53:44 crc kubenswrapper[4890]: I0121 15:53:44.119335 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:53:44 crc kubenswrapper[4890]: I0121 15:53:44.152919 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6489682500000002 podStartE2EDuration="8.152894468s" podCreationTimestamp="2026-01-21 15:53:36 +0000 UTC" firstStartedPulling="2026-01-21 15:53:37.648564219 +0000 UTC m=+1300.010006638" lastFinishedPulling="2026-01-21 15:53:43.152490457 +0000 UTC m=+1305.513932856" observedRunningTime="2026-01-21 15:53:44.14087555 +0000 UTC m=+1306.502317959" watchObservedRunningTime="2026-01-21 15:53:44.152894468 +0000 UTC m=+1306.514336887" Jan 21 15:53:44 crc kubenswrapper[4890]: W0121 15:53:44.457066 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33bbda2a_fde6_466f_92c8_88556941b8a3.slice/crio-b43159309c1ae62ac35c149df1b31a33a3fb89155628ce87bc30f11a30db813b WatchSource:0}: Error finding container b43159309c1ae62ac35c149df1b31a33a3fb89155628ce87bc30f11a30db813b: Status 404 returned error can't find the container with id b43159309c1ae62ac35c149df1b31a33a3fb89155628ce87bc30f11a30db813b Jan 21 
15:53:44 crc kubenswrapper[4890]: I0121 15:53:44.469122 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6946c9f5b4-2l82t"] Jan 21 15:53:45 crc kubenswrapper[4890]: I0121 15:53:45.143172 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6946c9f5b4-2l82t" event={"ID":"33bbda2a-fde6-466f-92c8-88556941b8a3","Type":"ContainerStarted","Data":"9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091"} Jan 21 15:53:45 crc kubenswrapper[4890]: I0121 15:53:45.143496 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6946c9f5b4-2l82t" event={"ID":"33bbda2a-fde6-466f-92c8-88556941b8a3","Type":"ContainerStarted","Data":"6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333"} Jan 21 15:53:45 crc kubenswrapper[4890]: I0121 15:53:45.143510 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6946c9f5b4-2l82t" event={"ID":"33bbda2a-fde6-466f-92c8-88556941b8a3","Type":"ContainerStarted","Data":"b43159309c1ae62ac35c149df1b31a33a3fb89155628ce87bc30f11a30db813b"} Jan 21 15:53:45 crc kubenswrapper[4890]: I0121 15:53:45.143544 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:45 crc kubenswrapper[4890]: I0121 15:53:45.143564 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:45 crc kubenswrapper[4890]: I0121 15:53:45.146069 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"371fefce-bb16-4c48-ac5a-01885e77c090","Type":"ContainerStarted","Data":"7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f"} Jan 21 15:53:45 crc kubenswrapper[4890]: I0121 15:53:45.186303 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6946c9f5b4-2l82t" podStartSLOduration=2.186282219 
podStartE2EDuration="2.186282219s" podCreationTimestamp="2026-01-21 15:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:45.16982913 +0000 UTC m=+1307.531271529" watchObservedRunningTime="2026-01-21 15:53:45.186282219 +0000 UTC m=+1307.547724638" Jan 21 15:53:45 crc kubenswrapper[4890]: I0121 15:53:45.206999 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.206979934 podStartE2EDuration="3.206979934s" podCreationTimestamp="2026-01-21 15:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:45.190974376 +0000 UTC m=+1307.552416775" watchObservedRunningTime="2026-01-21 15:53:45.206979934 +0000 UTC m=+1307.568422343" Jan 21 15:53:46 crc kubenswrapper[4890]: I0121 15:53:46.156539 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 15:53:46 crc kubenswrapper[4890]: I0121 15:53:46.499711 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 15:53:46 crc kubenswrapper[4890]: I0121 15:53:46.693228 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 15:53:47 crc kubenswrapper[4890]: I0121 15:53:47.227493 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:53:47 crc kubenswrapper[4890]: I0121 15:53:47.649696 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:53:47 crc kubenswrapper[4890]: I0121 15:53:47.727741 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-cbzsv"] Jan 21 15:53:47 crc kubenswrapper[4890]: I0121 15:53:47.728019 4890 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" podUID="53e0153f-134a-480a-9612-c0cd342594c5" containerName="dnsmasq-dns" containerID="cri-o://f7af32f3b549ff9c597f62cbdae56ad477fa8bc3a6f8183f6ee62dcfb55b8bba" gracePeriod=10 Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.184743 4890 generic.go:334] "Generic (PLEG): container finished" podID="53e0153f-134a-480a-9612-c0cd342594c5" containerID="f7af32f3b549ff9c597f62cbdae56ad477fa8bc3a6f8183f6ee62dcfb55b8bba" exitCode=0 Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.185275 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1ee26918-8402-4dfe-a822-90ccc15dcefd" containerName="cinder-scheduler" containerID="cri-o://24ef859a60c45171a22988984e960e99c273538b2a591ad3e12dda8609ac24f5" gracePeriod=30 Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.184830 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" event={"ID":"53e0153f-134a-480a-9612-c0cd342594c5","Type":"ContainerDied","Data":"f7af32f3b549ff9c597f62cbdae56ad477fa8bc3a6f8183f6ee62dcfb55b8bba"} Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.185426 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" event={"ID":"53e0153f-134a-480a-9612-c0cd342594c5","Type":"ContainerDied","Data":"65469e8ed8aafa25071a79c3d72d8b3c158c31214c3568cd3f732676d01c7504"} Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.185446 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65469e8ed8aafa25071a79c3d72d8b3c158c31214c3568cd3f732676d01c7504" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.185846 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1ee26918-8402-4dfe-a822-90ccc15dcefd" containerName="probe" 
containerID="cri-o://f47f4b211d1d1be5e5bb9da17b9d3819d934f4fdd1eb01865cdf0d3ce89f2d0c" gracePeriod=30 Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.243418 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.394711 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-svc\") pod \"53e0153f-134a-480a-9612-c0cd342594c5\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.394753 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjzf7\" (UniqueName: \"kubernetes.io/projected/53e0153f-134a-480a-9612-c0cd342594c5-kube-api-access-bjzf7\") pod \"53e0153f-134a-480a-9612-c0cd342594c5\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.394859 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-swift-storage-0\") pod \"53e0153f-134a-480a-9612-c0cd342594c5\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.394897 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-config\") pod \"53e0153f-134a-480a-9612-c0cd342594c5\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.394934 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-nb\") pod 
\"53e0153f-134a-480a-9612-c0cd342594c5\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.395109 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-sb\") pod \"53e0153f-134a-480a-9612-c0cd342594c5\" (UID: \"53e0153f-134a-480a-9612-c0cd342594c5\") " Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.417615 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e0153f-134a-480a-9612-c0cd342594c5-kube-api-access-bjzf7" (OuterVolumeSpecName: "kube-api-access-bjzf7") pod "53e0153f-134a-480a-9612-c0cd342594c5" (UID: "53e0153f-134a-480a-9612-c0cd342594c5"). InnerVolumeSpecName "kube-api-access-bjzf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.451950 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53e0153f-134a-480a-9612-c0cd342594c5" (UID: "53e0153f-134a-480a-9612-c0cd342594c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.456256 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "53e0153f-134a-480a-9612-c0cd342594c5" (UID: "53e0153f-134a-480a-9612-c0cd342594c5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.472540 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-config" (OuterVolumeSpecName: "config") pod "53e0153f-134a-480a-9612-c0cd342594c5" (UID: "53e0153f-134a-480a-9612-c0cd342594c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.490796 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "53e0153f-134a-480a-9612-c0cd342594c5" (UID: "53e0153f-134a-480a-9612-c0cd342594c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.492956 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "53e0153f-134a-480a-9612-c0cd342594c5" (UID: "53e0153f-134a-480a-9612-c0cd342594c5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.498331 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.498422 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjzf7\" (UniqueName: \"kubernetes.io/projected/53e0153f-134a-480a-9612-c0cd342594c5-kube-api-access-bjzf7\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.498441 4890 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.498451 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.498462 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:48 crc kubenswrapper[4890]: I0121 15:53:48.498473 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53e0153f-134a-480a-9612-c0cd342594c5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:49 crc kubenswrapper[4890]: I0121 15:53:49.194646 4890 generic.go:334] "Generic (PLEG): container finished" podID="1ee26918-8402-4dfe-a822-90ccc15dcefd" containerID="f47f4b211d1d1be5e5bb9da17b9d3819d934f4fdd1eb01865cdf0d3ce89f2d0c" exitCode=0 Jan 21 15:53:49 crc kubenswrapper[4890]: I0121 15:53:49.194757 4890 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-cbzsv" Jan 21 15:53:49 crc kubenswrapper[4890]: I0121 15:53:49.194822 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ee26918-8402-4dfe-a822-90ccc15dcefd","Type":"ContainerDied","Data":"f47f4b211d1d1be5e5bb9da17b9d3819d934f4fdd1eb01865cdf0d3ce89f2d0c"} Jan 21 15:53:49 crc kubenswrapper[4890]: I0121 15:53:49.229959 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-cbzsv"] Jan 21 15:53:49 crc kubenswrapper[4890]: I0121 15:53:49.237645 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-cbzsv"] Jan 21 15:53:49 crc kubenswrapper[4890]: I0121 15:53:49.309009 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:49 crc kubenswrapper[4890]: I0121 15:53:49.402295 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:53:49 crc kubenswrapper[4890]: I0121 15:53:49.925587 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e0153f-134a-480a-9612-c0cd342594c5" path="/var/lib/kubelet/pods/53e0153f-134a-480a-9612-c0cd342594c5/volumes" Jan 21 15:53:51 crc kubenswrapper[4890]: I0121 15:53:51.256935 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:53:51 crc kubenswrapper[4890]: I0121 15:53:51.351720 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:51 crc kubenswrapper[4890]: I0121 15:53:51.357270 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:53:51 crc kubenswrapper[4890]: I0121 15:53:51.940536 4890 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.259757 4890 generic.go:334] "Generic (PLEG): container finished" podID="1ee26918-8402-4dfe-a822-90ccc15dcefd" containerID="24ef859a60c45171a22988984e960e99c273538b2a591ad3e12dda8609ac24f5" exitCode=0 Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.261687 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ee26918-8402-4dfe-a822-90ccc15dcefd","Type":"ContainerDied","Data":"24ef859a60c45171a22988984e960e99c273538b2a591ad3e12dda8609ac24f5"} Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.756600 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.784792 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv96k\" (UniqueName: \"kubernetes.io/projected/1ee26918-8402-4dfe-a822-90ccc15dcefd-kube-api-access-qv96k\") pod \"1ee26918-8402-4dfe-a822-90ccc15dcefd\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.784866 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data-custom\") pod \"1ee26918-8402-4dfe-a822-90ccc15dcefd\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.784930 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-combined-ca-bundle\") pod \"1ee26918-8402-4dfe-a822-90ccc15dcefd\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.784956 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data\") pod \"1ee26918-8402-4dfe-a822-90ccc15dcefd\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.785068 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee26918-8402-4dfe-a822-90ccc15dcefd-etc-machine-id\") pod \"1ee26918-8402-4dfe-a822-90ccc15dcefd\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.785102 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-scripts\") pod \"1ee26918-8402-4dfe-a822-90ccc15dcefd\" (UID: \"1ee26918-8402-4dfe-a822-90ccc15dcefd\") " Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.785468 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ee26918-8402-4dfe-a822-90ccc15dcefd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1ee26918-8402-4dfe-a822-90ccc15dcefd" (UID: "1ee26918-8402-4dfe-a822-90ccc15dcefd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.798709 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee26918-8402-4dfe-a822-90ccc15dcefd-kube-api-access-qv96k" (OuterVolumeSpecName: "kube-api-access-qv96k") pod "1ee26918-8402-4dfe-a822-90ccc15dcefd" (UID: "1ee26918-8402-4dfe-a822-90ccc15dcefd"). InnerVolumeSpecName "kube-api-access-qv96k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.800043 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-scripts" (OuterVolumeSpecName: "scripts") pod "1ee26918-8402-4dfe-a822-90ccc15dcefd" (UID: "1ee26918-8402-4dfe-a822-90ccc15dcefd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.803515 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1ee26918-8402-4dfe-a822-90ccc15dcefd" (UID: "1ee26918-8402-4dfe-a822-90ccc15dcefd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.845616 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ee26918-8402-4dfe-a822-90ccc15dcefd" (UID: "1ee26918-8402-4dfe-a822-90ccc15dcefd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.887382 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv96k\" (UniqueName: \"kubernetes.io/projected/1ee26918-8402-4dfe-a822-90ccc15dcefd-kube-api-access-qv96k\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.887578 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.887594 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.887603 4890 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee26918-8402-4dfe-a822-90ccc15dcefd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.887633 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.953827 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data" (OuterVolumeSpecName: "config-data") pod "1ee26918-8402-4dfe-a822-90ccc15dcefd" (UID: "1ee26918-8402-4dfe-a822-90ccc15dcefd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4890]: I0121 15:53:52.988891 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee26918-8402-4dfe-a822-90ccc15dcefd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.270051 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ee26918-8402-4dfe-a822-90ccc15dcefd","Type":"ContainerDied","Data":"be18516abdf549bad047557a50fda9bdb989c2049d4d8505e4d7bb2a4fee8f02"} Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.270103 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.270135 4890 scope.go:117] "RemoveContainer" containerID="f47f4b211d1d1be5e5bb9da17b9d3819d934f4fdd1eb01865cdf0d3ce89f2d0c" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.297806 4890 scope.go:117] "RemoveContainer" containerID="24ef859a60c45171a22988984e960e99c273538b2a591ad3e12dda8609ac24f5" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.314700 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.324778 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.347337 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:53:53 crc kubenswrapper[4890]: E0121 15:53:53.347880 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee26918-8402-4dfe-a822-90ccc15dcefd" containerName="cinder-scheduler" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.347902 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee26918-8402-4dfe-a822-90ccc15dcefd" 
containerName="cinder-scheduler" Jan 21 15:53:53 crc kubenswrapper[4890]: E0121 15:53:53.347923 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e0153f-134a-480a-9612-c0cd342594c5" containerName="dnsmasq-dns" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.347933 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e0153f-134a-480a-9612-c0cd342594c5" containerName="dnsmasq-dns" Jan 21 15:53:53 crc kubenswrapper[4890]: E0121 15:53:53.347950 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e0153f-134a-480a-9612-c0cd342594c5" containerName="init" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.347959 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e0153f-134a-480a-9612-c0cd342594c5" containerName="init" Jan 21 15:53:53 crc kubenswrapper[4890]: E0121 15:53:53.347975 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee26918-8402-4dfe-a822-90ccc15dcefd" containerName="probe" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.347981 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee26918-8402-4dfe-a822-90ccc15dcefd" containerName="probe" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.348214 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee26918-8402-4dfe-a822-90ccc15dcefd" containerName="cinder-scheduler" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.348233 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee26918-8402-4dfe-a822-90ccc15dcefd" containerName="probe" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.348262 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e0153f-134a-480a-9612-c0cd342594c5" containerName="dnsmasq-dns" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.349682 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.351683 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.358132 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.395498 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1463d4e1-9ed2-4f45-b473-a94d18a4156f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.395551 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.395591 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.395623 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-scripts\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.395668 4890 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjp2f\" (UniqueName: \"kubernetes.io/projected/1463d4e1-9ed2-4f45-b473-a94d18a4156f-kube-api-access-cjp2f\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.395762 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.497233 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1463d4e1-9ed2-4f45-b473-a94d18a4156f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.497279 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.497311 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.497331 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-scripts\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.497378 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjp2f\" (UniqueName: \"kubernetes.io/projected/1463d4e1-9ed2-4f45-b473-a94d18a4156f-kube-api-access-cjp2f\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.497398 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1463d4e1-9ed2-4f45-b473-a94d18a4156f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.497503 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.501958 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.511078 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " 
pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.511461 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-scripts\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.517387 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.526984 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjp2f\" (UniqueName: \"kubernetes.io/projected/1463d4e1-9ed2-4f45-b473-a94d18a4156f-kube-api-access-cjp2f\") pod \"cinder-scheduler-0\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.672193 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:53:53 crc kubenswrapper[4890]: I0121 15:53:53.924776 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee26918-8402-4dfe-a822-90ccc15dcefd" path="/var/lib/kubelet/pods/1ee26918-8402-4dfe-a822-90ccc15dcefd/volumes" Jan 21 15:53:54 crc kubenswrapper[4890]: I0121 15:53:54.157695 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:53:54 crc kubenswrapper[4890]: W0121 15:53:54.158969 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1463d4e1_9ed2_4f45_b473_a94d18a4156f.slice/crio-43a4af8e5cde4fc445ec725b05d317b2194127514557a6317245a2e736905b36 WatchSource:0}: Error finding container 43a4af8e5cde4fc445ec725b05d317b2194127514557a6317245a2e736905b36: Status 404 returned error can't find the container with id 43a4af8e5cde4fc445ec725b05d317b2194127514557a6317245a2e736905b36 Jan 21 15:53:54 crc kubenswrapper[4890]: I0121 15:53:54.281157 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1463d4e1-9ed2-4f45-b473-a94d18a4156f","Type":"ContainerStarted","Data":"43a4af8e5cde4fc445ec725b05d317b2194127514557a6317245a2e736905b36"} Jan 21 15:53:54 crc kubenswrapper[4890]: I0121 15:53:54.424404 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:53:54 crc kubenswrapper[4890]: I0121 15:53:54.502143 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bc6b59f74-wjlfx"] Jan 21 15:53:54 crc kubenswrapper[4890]: I0121 15:53:54.502637 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bc6b59f74-wjlfx" podUID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerName="neutron-api" containerID="cri-o://0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48" 
gracePeriod=30 Jan 21 15:53:54 crc kubenswrapper[4890]: I0121 15:53:54.503084 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bc6b59f74-wjlfx" podUID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerName="neutron-httpd" containerID="cri-o://e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f" gracePeriod=30 Jan 21 15:53:54 crc kubenswrapper[4890]: I0121 15:53:54.709459 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 15:53:55 crc kubenswrapper[4890]: I0121 15:53:55.303995 4890 generic.go:334] "Generic (PLEG): container finished" podID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerID="e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f" exitCode=0 Jan 21 15:53:55 crc kubenswrapper[4890]: I0121 15:53:55.304402 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc6b59f74-wjlfx" event={"ID":"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8","Type":"ContainerDied","Data":"e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f"} Jan 21 15:53:55 crc kubenswrapper[4890]: I0121 15:53:55.309549 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1463d4e1-9ed2-4f45-b473-a94d18a4156f","Type":"ContainerStarted","Data":"17017fb4db752be398957128e72379f1e6bbd55f2c985855c266996c3fbae23f"} Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.137764 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.217942 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.281138 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b57ffb-952ln"] Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.281428 
4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b57ffb-952ln" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api-log" containerID="cri-o://d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581" gracePeriod=30 Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.281916 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b57ffb-952ln" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api" containerID="cri-o://52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466" gracePeriod=30 Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.344747 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1463d4e1-9ed2-4f45-b473-a94d18a4156f","Type":"ContainerStarted","Data":"ac358f25d3bc11ecfd3d8286ee71238981958d5ba551cfdc752cc98b87178c26"} Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.373260 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.373245366 podStartE2EDuration="3.373245366s" podCreationTimestamp="2026-01-21 15:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:56.371631166 +0000 UTC m=+1318.733073575" watchObservedRunningTime="2026-01-21 15:53:56.373245366 +0000 UTC m=+1318.734687775" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.578534 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.580132 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.585722 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.586990 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.587149 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.587325 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cvfc9" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.661549 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.661639 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.661693 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config-secret\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.661793 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6zsb\" (UniqueName: \"kubernetes.io/projected/defb5f2d-053c-4b32-beb1-d10d70bacce1-kube-api-access-t6zsb\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.763277 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.763666 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.763711 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config-secret\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.763801 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6zsb\" (UniqueName: \"kubernetes.io/projected/defb5f2d-053c-4b32-beb1-d10d70bacce1-kube-api-access-t6zsb\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.764175 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.769274 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.780002 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config-secret\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.787860 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6zsb\" (UniqueName: \"kubernetes.io/projected/defb5f2d-053c-4b32-beb1-d10d70bacce1-kube-api-access-t6zsb\") pod \"openstackclient\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " pod="openstack/openstackclient" Jan 21 15:53:56 crc kubenswrapper[4890]: I0121 15:53:56.913113 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 15:53:57 crc kubenswrapper[4890]: I0121 15:53:57.390369 4890 generic.go:334] "Generic (PLEG): container finished" podID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerID="d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581" exitCode=143 Jan 21 15:53:57 crc kubenswrapper[4890]: I0121 15:53:57.390569 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b57ffb-952ln" event={"ID":"e9f074ad-e83e-495d-a1f0-177b0a9ffb85","Type":"ContainerDied","Data":"d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581"} Jan 21 15:53:57 crc kubenswrapper[4890]: I0121 15:53:57.569811 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.400508 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"defb5f2d-053c-4b32-beb1-d10d70bacce1","Type":"ContainerStarted","Data":"e5bc1e16313f2cd9c4e1412f49455d4fd4f812e83be700056c83cfda66d7c292"} Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.672604 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.758323 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-64d44774fc-92wps"] Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.759775 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.769454 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.769657 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.769742 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.781711 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64d44774fc-92wps"] Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.802947 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-config-data\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.803039 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-public-tls-certs\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.803099 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-log-httpd\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 
15:53:58.803165 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-run-httpd\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.803194 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-etc-swift\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.803250 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6qfr\" (UniqueName: \"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-kube-api-access-g6qfr\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.803304 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-combined-ca-bundle\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.803363 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-internal-tls-certs\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" 
Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.904969 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-run-httpd\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.905278 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-etc-swift\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.905504 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-run-httpd\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.905519 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6qfr\" (UniqueName: \"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-kube-api-access-g6qfr\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.905671 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-combined-ca-bundle\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.905703 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-internal-tls-certs\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.905819 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-config-data\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.905946 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-public-tls-certs\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.906029 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-log-httpd\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.906443 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-log-httpd\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.911045 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-internal-tls-certs\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.912275 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-combined-ca-bundle\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.912281 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-config-data\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.914049 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-public-tls-certs\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.914308 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-etc-swift\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:58 crc kubenswrapper[4890]: I0121 15:53:58.934454 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6qfr\" (UniqueName: 
\"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-kube-api-access-g6qfr\") pod \"swift-proxy-64d44774fc-92wps\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 15:53:59.092320 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 15:53:59.672742 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 15:53:59.673367 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="ceilometer-central-agent" containerID="cri-o://5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add" gracePeriod=30 Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 15:53:59.673414 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="sg-core" containerID="cri-o://a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2" gracePeriod=30 Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 15:53:59.673503 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="proxy-httpd" containerID="cri-o://d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e" gracePeriod=30 Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 15:53:59.673531 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="ceilometer-notification-agent" containerID="cri-o://17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9" gracePeriod=30 Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 
15:53:59.692081 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 15:53:59.762563 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64d44774fc-92wps"] Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 15:53:59.775540 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b57ffb-952ln" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:46150->10.217.0.165:9311: read: connection reset by peer" Jan 21 15:53:59 crc kubenswrapper[4890]: I0121 15:53:59.775540 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b57ffb-952ln" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:46148->10.217.0.165:9311: read: connection reset by peer" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.263428 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.341171 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data-custom\") pod \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.341236 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data\") pod \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.341301 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbcsl\" (UniqueName: \"kubernetes.io/projected/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-kube-api-access-zbcsl\") pod \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.341398 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-logs\") pod \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.341438 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-combined-ca-bundle\") pod \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\" (UID: \"e9f074ad-e83e-495d-a1f0-177b0a9ffb85\") " Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.346506 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e9f074ad-e83e-495d-a1f0-177b0a9ffb85" (UID: "e9f074ad-e83e-495d-a1f0-177b0a9ffb85"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.347170 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-logs" (OuterVolumeSpecName: "logs") pod "e9f074ad-e83e-495d-a1f0-177b0a9ffb85" (UID: "e9f074ad-e83e-495d-a1f0-177b0a9ffb85"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.350162 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-kube-api-access-zbcsl" (OuterVolumeSpecName: "kube-api-access-zbcsl") pod "e9f074ad-e83e-495d-a1f0-177b0a9ffb85" (UID: "e9f074ad-e83e-495d-a1f0-177b0a9ffb85"). InnerVolumeSpecName "kube-api-access-zbcsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.384450 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9f074ad-e83e-495d-a1f0-177b0a9ffb85" (UID: "e9f074ad-e83e-495d-a1f0-177b0a9ffb85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.403482 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data" (OuterVolumeSpecName: "config-data") pod "e9f074ad-e83e-495d-a1f0-177b0a9ffb85" (UID: "e9f074ad-e83e-495d-a1f0-177b0a9ffb85"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.421494 4890 generic.go:334] "Generic (PLEG): container finished" podID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerID="52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466" exitCode=0 Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.421569 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b57ffb-952ln" event={"ID":"e9f074ad-e83e-495d-a1f0-177b0a9ffb85","Type":"ContainerDied","Data":"52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466"} Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.421601 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b57ffb-952ln" event={"ID":"e9f074ad-e83e-495d-a1f0-177b0a9ffb85","Type":"ContainerDied","Data":"70448d07d89f1032b4944cf12740aa50f6ee2b94105d77a827d44a7be5c57ee2"} Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.421622 4890 scope.go:117] "RemoveContainer" containerID="52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.421747 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b57ffb-952ln" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.428977 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64d44774fc-92wps" event={"ID":"d009f76d-bc65-453c-a05f-29454314ab7a","Type":"ContainerStarted","Data":"4a223e2232d09a7902cafe5997f0744b43b30ca16b7805665ca1778aa131272b"} Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.429011 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64d44774fc-92wps" event={"ID":"d009f76d-bc65-453c-a05f-29454314ab7a","Type":"ContainerStarted","Data":"900fd7400b1809198eec2f87e30f7758ce7b4277e57b7f022112ff25e938935f"} Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.429021 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64d44774fc-92wps" event={"ID":"d009f76d-bc65-453c-a05f-29454314ab7a","Type":"ContainerStarted","Data":"042023fed31004470a13edc002d94f939e412a6bca85e4a079554400f7f2fce7"} Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.429660 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.429683 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.435117 4890 generic.go:334] "Generic (PLEG): container finished" podID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerID="d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e" exitCode=0 Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.435146 4890 generic.go:334] "Generic (PLEG): container finished" podID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerID="a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2" exitCode=2 Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.435156 4890 generic.go:334] "Generic (PLEG): container finished" 
podID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerID="5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add" exitCode=0 Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.435175 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerDied","Data":"d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e"} Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.435193 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerDied","Data":"a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2"} Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.435206 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerDied","Data":"5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add"} Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.446094 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.446120 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.446129 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbcsl\" (UniqueName: \"kubernetes.io/projected/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-kube-api-access-zbcsl\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.446139 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.446149 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f074ad-e83e-495d-a1f0-177b0a9ffb85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.461610 4890 scope.go:117] "RemoveContainer" containerID="d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.497631 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-64d44774fc-92wps" podStartSLOduration=2.4976067410000002 podStartE2EDuration="2.497606741s" podCreationTimestamp="2026-01-21 15:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:54:00.472399945 +0000 UTC m=+1322.833842354" watchObservedRunningTime="2026-01-21 15:54:00.497606741 +0000 UTC m=+1322.859049160" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.505405 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b57ffb-952ln"] Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.513035 4890 scope.go:117] "RemoveContainer" containerID="52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.515028 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b57ffb-952ln"] Jan 21 15:54:00 crc kubenswrapper[4890]: E0121 15:54:00.524628 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466\": container with ID starting with 52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466 not 
found: ID does not exist" containerID="52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.524704 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466"} err="failed to get container status \"52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466\": rpc error: code = NotFound desc = could not find container \"52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466\": container with ID starting with 52320532fe81102dcbd9be9765410cd80ab33fc8301b63fdcc3fbe5490a2a466 not found: ID does not exist" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.524734 4890 scope.go:117] "RemoveContainer" containerID="d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581" Jan 21 15:54:00 crc kubenswrapper[4890]: E0121 15:54:00.525243 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581\": container with ID starting with d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581 not found: ID does not exist" containerID="d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581" Jan 21 15:54:00 crc kubenswrapper[4890]: I0121 15:54:00.525283 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581"} err="failed to get container status \"d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581\": rpc error: code = NotFound desc = could not find container \"d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581\": container with ID starting with d94151aa8f67cda961a6820658642343550f64c3defd179e0c9c431688ac1581 not found: ID does not exist" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.330553 
4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.367071 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-httpd-config\") pod \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.367159 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-combined-ca-bundle\") pod \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.367210 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9cv7\" (UniqueName: \"kubernetes.io/projected/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-kube-api-access-z9cv7\") pod \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.367299 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-config\") pod \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.367384 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-ovndb-tls-certs\") pod \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\" (UID: \"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8\") " Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.374395 4890 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" (UID: "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.389731 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-kube-api-access-z9cv7" (OuterVolumeSpecName: "kube-api-access-z9cv7") pod "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" (UID: "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8"). InnerVolumeSpecName "kube-api-access-z9cv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.449979 4890 generic.go:334] "Generic (PLEG): container finished" podID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerID="0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48" exitCode=0 Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.450056 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bc6b59f74-wjlfx" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.450063 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc6b59f74-wjlfx" event={"ID":"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8","Type":"ContainerDied","Data":"0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48"} Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.450093 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc6b59f74-wjlfx" event={"ID":"4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8","Type":"ContainerDied","Data":"43ad9ca3720a90c93940cfedc082803f78f521a89371ac3d3bfc4383a0373797"} Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.450112 4890 scope.go:117] "RemoveContainer" containerID="e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.469648 4890 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.469682 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9cv7\" (UniqueName: \"kubernetes.io/projected/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-kube-api-access-z9cv7\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.473180 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" (UID: "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.480461 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-config" (OuterVolumeSpecName: "config") pod "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" (UID: "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.498516 4890 scope.go:117] "RemoveContainer" containerID="0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.543508 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" (UID: "4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.572527 4890 scope.go:117] "RemoveContainer" containerID="e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f" Jan 21 15:54:01 crc kubenswrapper[4890]: E0121 15:54:01.573259 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f\": container with ID starting with e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f not found: ID does not exist" containerID="e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.573294 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f"} err="failed to get container status \"e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f\": rpc error: code = NotFound desc = could not find container \"e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f\": container with ID starting with e3cbb9287f05846f00148b546515083e28df81f45a3b36a97d849f67fd1d294f not found: ID does not exist" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.573322 4890 scope.go:117] "RemoveContainer" containerID="0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48" Jan 21 15:54:01 crc kubenswrapper[4890]: E0121 15:54:01.573600 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48\": container with ID starting with 0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48 not found: ID does not exist" containerID="0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.573636 
4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48"} err="failed to get container status \"0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48\": rpc error: code = NotFound desc = could not find container \"0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48\": container with ID starting with 0d007b8033b07b7b97ac609b6f9885e5b743776a0c7e0aa9cff171e91b37af48 not found: ID does not exist" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.574097 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.574181 4890 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.574238 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.788145 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bc6b59f74-wjlfx"] Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.797851 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7bc6b59f74-wjlfx"] Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.936691 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" path="/var/lib/kubelet/pods/4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8/volumes" Jan 21 15:54:01 crc kubenswrapper[4890]: I0121 15:54:01.937749 4890 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" path="/var/lib/kubelet/pods/e9f074ad-e83e-495d-a1f0-177b0a9ffb85/volumes" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.273866 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.305702 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-run-httpd\") pod \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.305856 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-sg-core-conf-yaml\") pod \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.305888 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-log-httpd\") pod \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.306162 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" (UID: "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.306495 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" (UID: "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.307329 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-scripts\") pod \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.307462 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fwn6\" (UniqueName: \"kubernetes.io/projected/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-kube-api-access-2fwn6\") pod \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.307482 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-config-data\") pod \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.307558 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-combined-ca-bundle\") pod \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\" (UID: \"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786\") " Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.308180 4890 reconciler_common.go:293] "Volume 
detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.308199 4890 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.324865 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-scripts" (OuterVolumeSpecName: "scripts") pod "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" (UID: "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.326997 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-kube-api-access-2fwn6" (OuterVolumeSpecName: "kube-api-access-2fwn6") pod "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" (UID: "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786"). InnerVolumeSpecName "kube-api-access-2fwn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.356638 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" (UID: "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.405426 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" (UID: "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.411616 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.411702 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fwn6\" (UniqueName: \"kubernetes.io/projected/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-kube-api-access-2fwn6\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.411759 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.411811 4890 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.426677 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-config-data" (OuterVolumeSpecName: "config-data") pod "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" (UID: "d89dc199-2b6a-4d15-a8f2-ccdc8ed32786"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.474471 4890 generic.go:334] "Generic (PLEG): container finished" podID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerID="17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9" exitCode=0 Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.474510 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerDied","Data":"17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9"} Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.474535 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d89dc199-2b6a-4d15-a8f2-ccdc8ed32786","Type":"ContainerDied","Data":"72fa0a40efd47ed30c6ebd0094e93c847fe8f502aaf5bb204883dc9dc2400f8f"} Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.474540 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.474553 4890 scope.go:117] "RemoveContainer" containerID="d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.514652 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.517585 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.541076 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.552730 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:03 crc kubenswrapper[4890]: E0121 15:54:03.553258 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="ceilometer-central-agent" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553271 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="ceilometer-central-agent" Jan 21 15:54:03 crc kubenswrapper[4890]: E0121 15:54:03.553283 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553289 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api" Jan 21 15:54:03 crc kubenswrapper[4890]: E0121 15:54:03.553320 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api-log" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 
15:54:03.553326 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api-log" Jan 21 15:54:03 crc kubenswrapper[4890]: E0121 15:54:03.553340 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="sg-core" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553373 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="sg-core" Jan 21 15:54:03 crc kubenswrapper[4890]: E0121 15:54:03.553384 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="ceilometer-notification-agent" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553390 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="ceilometer-notification-agent" Jan 21 15:54:03 crc kubenswrapper[4890]: E0121 15:54:03.553412 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerName="neutron-httpd" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553419 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerName="neutron-httpd" Jan 21 15:54:03 crc kubenswrapper[4890]: E0121 15:54:03.553430 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="proxy-httpd" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553455 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="proxy-httpd" Jan 21 15:54:03 crc kubenswrapper[4890]: E0121 15:54:03.553463 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerName="neutron-api" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553469 
4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerName="neutron-api" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553665 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="proxy-httpd" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553788 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerName="neutron-httpd" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553801 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553811 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="ceilometer-central-agent" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553822 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f074ad-e83e-495d-a1f0-177b0a9ffb85" containerName="barbican-api-log" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553834 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="sg-core" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553842 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7a9d2f-f167-4b8a-a107-c1264ba6c4b8" containerName="neutron-api" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.553855 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" containerName="ceilometer-notification-agent" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.555345 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.558217 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.558526 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.559141 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.615983 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-run-httpd\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.616381 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-scripts\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.616420 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-config-data\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.616493 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-log-httpd\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 
15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.616542 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.616564 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7zww\" (UniqueName: \"kubernetes.io/projected/7882bed4-6915-422f-a653-c6f841363752-kube-api-access-p7zww\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.616590 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.717852 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-run-httpd\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.717913 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-scripts\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.717934 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-config-data\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.717969 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-log-httpd\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.718001 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.718019 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7zww\" (UniqueName: \"kubernetes.io/projected/7882bed4-6915-422f-a653-c6f841363752-kube-api-access-p7zww\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.718037 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.718391 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-run-httpd\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc 
kubenswrapper[4890]: I0121 15:54:03.718656 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-log-httpd\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.723109 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.725258 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.726236 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-config-data\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.727556 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-scripts\") pod \"ceilometer-0\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.736942 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7zww\" (UniqueName: \"kubernetes.io/projected/7882bed4-6915-422f-a653-c6f841363752-kube-api-access-p7zww\") pod \"ceilometer-0\" (UID: 
\"7882bed4-6915-422f-a653-c6f841363752\") " pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.872068 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.925573 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89dc199-2b6a-4d15-a8f2-ccdc8ed32786" path="/var/lib/kubelet/pods/d89dc199-2b6a-4d15-a8f2-ccdc8ed32786/volumes" Jan 21 15:54:03 crc kubenswrapper[4890]: I0121 15:54:03.926598 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 15:54:07 crc kubenswrapper[4890]: I0121 15:54:07.363948 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.103305 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.103685 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.238611 4890 scope.go:117] "RemoveContainer" containerID="a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.297495 4890 scope.go:117] "RemoveContainer" containerID="17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.475770 4890 scope.go:117] "RemoveContainer" containerID="5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.509077 4890 scope.go:117] "RemoveContainer" containerID="d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e" Jan 21 15:54:09 crc kubenswrapper[4890]: E0121 15:54:09.509598 4890 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e\": container with ID starting with d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e not found: ID does not exist" containerID="d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.509658 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e"} err="failed to get container status \"d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e\": rpc error: code = NotFound desc = could not find container \"d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e\": container with ID starting with d9b297f8ce5fcbeb8541909f00cfb230f22f4be6dd84fd805b54a9305f24234e not found: ID does not exist" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.509691 4890 scope.go:117] "RemoveContainer" containerID="a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2" Jan 21 15:54:09 crc kubenswrapper[4890]: E0121 15:54:09.513729 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2\": container with ID starting with a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2 not found: ID does not exist" containerID="a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.514067 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2"} err="failed to get container status \"a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2\": rpc error: code = NotFound desc = could not find container 
\"a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2\": container with ID starting with a9161690703ff548b5f150d6cc45b056668dad8016d11bb9f48efcc528443ec2 not found: ID does not exist" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.514103 4890 scope.go:117] "RemoveContainer" containerID="17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9" Jan 21 15:54:09 crc kubenswrapper[4890]: E0121 15:54:09.514940 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9\": container with ID starting with 17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9 not found: ID does not exist" containerID="17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.514977 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9"} err="failed to get container status \"17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9\": rpc error: code = NotFound desc = could not find container \"17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9\": container with ID starting with 17e3c3fa330470ca9b974ef348e16c449cb5c7f7b04f27abf6642f41530db8a9 not found: ID does not exist" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.515023 4890 scope.go:117] "RemoveContainer" containerID="5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add" Jan 21 15:54:09 crc kubenswrapper[4890]: E0121 15:54:09.515570 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add\": container with ID starting with 5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add not found: ID does not exist" 
containerID="5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.515636 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add"} err="failed to get container status \"5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add\": rpc error: code = NotFound desc = could not find container \"5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add\": container with ID starting with 5edf506eddaa2ac650de8ed476d2a4bde7aace7f18dc3cc63ffe6350df6a9add not found: ID does not exist" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.567584 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"defb5f2d-053c-4b32-beb1-d10d70bacce1","Type":"ContainerStarted","Data":"17e25bf33dcd118f48a8e8f7cae037f543abe8f9a7ffe1c912b57bf6e4df359b"} Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.587238 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.855640856 podStartE2EDuration="13.587217328s" podCreationTimestamp="2026-01-21 15:53:56 +0000 UTC" firstStartedPulling="2026-01-21 15:53:57.570318075 +0000 UTC m=+1319.931760484" lastFinishedPulling="2026-01-21 15:54:09.301894547 +0000 UTC m=+1331.663336956" observedRunningTime="2026-01-21 15:54:09.585245879 +0000 UTC m=+1331.946688288" watchObservedRunningTime="2026-01-21 15:54:09.587217328 +0000 UTC m=+1331.948659757" Jan 21 15:54:09 crc kubenswrapper[4890]: I0121 15:54:09.758856 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:10 crc kubenswrapper[4890]: I0121 15:54:10.581381 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerStarted","Data":"1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02"} Jan 21 15:54:10 crc kubenswrapper[4890]: I0121 15:54:10.581699 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerStarted","Data":"b4d0088744200b88ecfbcff80521ed23ab17628f5739587b6a6fcb638f511647"} Jan 21 15:54:11 crc kubenswrapper[4890]: I0121 15:54:11.595561 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerStarted","Data":"ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e"} Jan 21 15:54:12 crc kubenswrapper[4890]: I0121 15:54:12.060309 4890 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod58c356dd-aad0-4de6-bf7e-8d0031f22429"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod58c356dd-aad0-4de6-bf7e-8d0031f22429] : Timed out while waiting for systemd to remove kubepods-besteffort-pod58c356dd_aad0_4de6_bf7e_8d0031f22429.slice" Jan 21 15:54:12 crc kubenswrapper[4890]: I0121 15:54:12.617681 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerStarted","Data":"326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e"} Jan 21 15:54:13 crc kubenswrapper[4890]: I0121 15:54:13.628579 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerStarted","Data":"3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176"} Jan 21 15:54:13 crc kubenswrapper[4890]: I0121 15:54:13.628883 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7882bed4-6915-422f-a653-c6f841363752" 
containerName="ceilometer-notification-agent" containerID="cri-o://ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e" gracePeriod=30 Jan 21 15:54:13 crc kubenswrapper[4890]: I0121 15:54:13.628898 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="sg-core" containerID="cri-o://326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e" gracePeriod=30 Jan 21 15:54:13 crc kubenswrapper[4890]: I0121 15:54:13.628834 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="proxy-httpd" containerID="cri-o://3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176" gracePeriod=30 Jan 21 15:54:13 crc kubenswrapper[4890]: I0121 15:54:13.628904 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:54:13 crc kubenswrapper[4890]: I0121 15:54:13.628764 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="ceilometer-central-agent" containerID="cri-o://1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02" gracePeriod=30 Jan 21 15:54:13 crc kubenswrapper[4890]: I0121 15:54:13.660922 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.101687103 podStartE2EDuration="10.660899723s" podCreationTimestamp="2026-01-21 15:54:03 +0000 UTC" firstStartedPulling="2026-01-21 15:54:09.765784026 +0000 UTC m=+1332.127226445" lastFinishedPulling="2026-01-21 15:54:13.324996646 +0000 UTC m=+1335.686439065" observedRunningTime="2026-01-21 15:54:13.653074469 +0000 UTC m=+1336.014516888" watchObservedRunningTime="2026-01-21 15:54:13.660899723 +0000 UTC m=+1336.022342132" Jan 21 15:54:14 crc kubenswrapper[4890]: 
I0121 15:54:14.641091 4890 generic.go:334] "Generic (PLEG): container finished" podID="7882bed4-6915-422f-a653-c6f841363752" containerID="326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e" exitCode=2 Jan 21 15:54:14 crc kubenswrapper[4890]: I0121 15:54:14.641122 4890 generic.go:334] "Generic (PLEG): container finished" podID="7882bed4-6915-422f-a653-c6f841363752" containerID="ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e" exitCode=0 Jan 21 15:54:14 crc kubenswrapper[4890]: I0121 15:54:14.641133 4890 generic.go:334] "Generic (PLEG): container finished" podID="7882bed4-6915-422f-a653-c6f841363752" containerID="1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02" exitCode=0 Jan 21 15:54:14 crc kubenswrapper[4890]: I0121 15:54:14.641156 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerDied","Data":"326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e"} Jan 21 15:54:14 crc kubenswrapper[4890]: I0121 15:54:14.641185 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerDied","Data":"ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e"} Jan 21 15:54:14 crc kubenswrapper[4890]: I0121 15:54:14.641195 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerDied","Data":"1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02"} Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.124589 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-q2zlv"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.125952 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.135511 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q2zlv"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.159011 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a95a75-e775-40eb-8e62-74b4e9b04f1f-operator-scripts\") pod \"nova-api-db-create-q2zlv\" (UID: \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\") " pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.159135 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rlt\" (UniqueName: \"kubernetes.io/projected/11a95a75-e775-40eb-8e62-74b4e9b04f1f-kube-api-access-64rlt\") pod \"nova-api-db-create-q2zlv\" (UID: \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\") " pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.227531 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xgb85"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.228882 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.257689 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xgb85"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.260852 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6jb\" (UniqueName: \"kubernetes.io/projected/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-kube-api-access-gr6jb\") pod \"nova-cell0-db-create-xgb85\" (UID: \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\") " pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.260900 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a95a75-e775-40eb-8e62-74b4e9b04f1f-operator-scripts\") pod \"nova-api-db-create-q2zlv\" (UID: \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\") " pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.261025 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64rlt\" (UniqueName: \"kubernetes.io/projected/11a95a75-e775-40eb-8e62-74b4e9b04f1f-kube-api-access-64rlt\") pod \"nova-api-db-create-q2zlv\" (UID: \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\") " pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.261120 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-operator-scripts\") pod \"nova-cell0-db-create-xgb85\" (UID: \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\") " pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.261810 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/11a95a75-e775-40eb-8e62-74b4e9b04f1f-operator-scripts\") pod \"nova-api-db-create-q2zlv\" (UID: \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\") " pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.271425 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e776-account-create-update-7pzwv"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.286416 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.292182 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64rlt\" (UniqueName: \"kubernetes.io/projected/11a95a75-e775-40eb-8e62-74b4e9b04f1f-kube-api-access-64rlt\") pod \"nova-api-db-create-q2zlv\" (UID: \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\") " pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.298346 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.304972 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e776-account-create-update-7pzwv"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.363334 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd07a67c-449b-4a93-8af5-b050a682d06b-operator-scripts\") pod \"nova-api-e776-account-create-update-7pzwv\" (UID: \"bd07a67c-449b-4a93-8af5-b050a682d06b\") " pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.363400 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85h7p\" (UniqueName: 
\"kubernetes.io/projected/bd07a67c-449b-4a93-8af5-b050a682d06b-kube-api-access-85h7p\") pod \"nova-api-e776-account-create-update-7pzwv\" (UID: \"bd07a67c-449b-4a93-8af5-b050a682d06b\") " pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.363467 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-operator-scripts\") pod \"nova-cell0-db-create-xgb85\" (UID: \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\") " pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.363515 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6jb\" (UniqueName: \"kubernetes.io/projected/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-kube-api-access-gr6jb\") pod \"nova-cell0-db-create-xgb85\" (UID: \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\") " pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.364580 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-operator-scripts\") pod \"nova-cell0-db-create-xgb85\" (UID: \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\") " pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.383180 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6jb\" (UniqueName: \"kubernetes.io/projected/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-kube-api-access-gr6jb\") pod \"nova-cell0-db-create-xgb85\" (UID: \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\") " pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.430509 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xxnmh"] Jan 21 15:54:16 crc 
kubenswrapper[4890]: I0121 15:54:16.431876 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.441930 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xxnmh"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.442337 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.474423 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd07a67c-449b-4a93-8af5-b050a682d06b-operator-scripts\") pod \"nova-api-e776-account-create-update-7pzwv\" (UID: \"bd07a67c-449b-4a93-8af5-b050a682d06b\") " pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.474470 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85h7p\" (UniqueName: \"kubernetes.io/projected/bd07a67c-449b-4a93-8af5-b050a682d06b-kube-api-access-85h7p\") pod \"nova-api-e776-account-create-update-7pzwv\" (UID: \"bd07a67c-449b-4a93-8af5-b050a682d06b\") " pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.474511 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/916a58e6-1bc6-47d4-a82d-15979fbf9dea-operator-scripts\") pod \"nova-cell1-db-create-xxnmh\" (UID: \"916a58e6-1bc6-47d4-a82d-15979fbf9dea\") " pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.474623 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8w9\" (UniqueName: 
\"kubernetes.io/projected/916a58e6-1bc6-47d4-a82d-15979fbf9dea-kube-api-access-qb8w9\") pod \"nova-cell1-db-create-xxnmh\" (UID: \"916a58e6-1bc6-47d4-a82d-15979fbf9dea\") " pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.475330 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd07a67c-449b-4a93-8af5-b050a682d06b-operator-scripts\") pod \"nova-api-e776-account-create-update-7pzwv\" (UID: \"bd07a67c-449b-4a93-8af5-b050a682d06b\") " pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.493058 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85h7p\" (UniqueName: \"kubernetes.io/projected/bd07a67c-449b-4a93-8af5-b050a682d06b-kube-api-access-85h7p\") pod \"nova-api-e776-account-create-update-7pzwv\" (UID: \"bd07a67c-449b-4a93-8af5-b050a682d06b\") " pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.505729 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-43f6-account-create-update-qd2ns"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.507025 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.511517 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.543095 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-43f6-account-create-update-qd2ns"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.549581 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.575871 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d291e4f-5daf-4e1a-888f-10df2538d171-operator-scripts\") pod \"nova-cell0-43f6-account-create-update-qd2ns\" (UID: \"3d291e4f-5daf-4e1a-888f-10df2538d171\") " pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.576002 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/916a58e6-1bc6-47d4-a82d-15979fbf9dea-operator-scripts\") pod \"nova-cell1-db-create-xxnmh\" (UID: \"916a58e6-1bc6-47d4-a82d-15979fbf9dea\") " pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.576056 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zwc2\" (UniqueName: \"kubernetes.io/projected/3d291e4f-5daf-4e1a-888f-10df2538d171-kube-api-access-8zwc2\") pod \"nova-cell0-43f6-account-create-update-qd2ns\" (UID: \"3d291e4f-5daf-4e1a-888f-10df2538d171\") " pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.576201 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb8w9\" (UniqueName: \"kubernetes.io/projected/916a58e6-1bc6-47d4-a82d-15979fbf9dea-kube-api-access-qb8w9\") pod \"nova-cell1-db-create-xxnmh\" (UID: \"916a58e6-1bc6-47d4-a82d-15979fbf9dea\") " pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.577933 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/916a58e6-1bc6-47d4-a82d-15979fbf9dea-operator-scripts\") pod \"nova-cell1-db-create-xxnmh\" (UID: \"916a58e6-1bc6-47d4-a82d-15979fbf9dea\") " pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.596570 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb8w9\" (UniqueName: \"kubernetes.io/projected/916a58e6-1bc6-47d4-a82d-15979fbf9dea-kube-api-access-qb8w9\") pod \"nova-cell1-db-create-xxnmh\" (UID: \"916a58e6-1bc6-47d4-a82d-15979fbf9dea\") " pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.651214 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d5d0-account-create-update-nlvhz"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.652745 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.658974 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.675589 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.677655 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zwc2\" (UniqueName: \"kubernetes.io/projected/3d291e4f-5daf-4e1a-888f-10df2538d171-kube-api-access-8zwc2\") pod \"nova-cell0-43f6-account-create-update-qd2ns\" (UID: \"3d291e4f-5daf-4e1a-888f-10df2538d171\") " pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.677765 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d291e4f-5daf-4e1a-888f-10df2538d171-operator-scripts\") pod \"nova-cell0-43f6-account-create-update-qd2ns\" (UID: \"3d291e4f-5daf-4e1a-888f-10df2538d171\") " pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.678413 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d291e4f-5daf-4e1a-888f-10df2538d171-operator-scripts\") pod \"nova-cell0-43f6-account-create-update-qd2ns\" (UID: \"3d291e4f-5daf-4e1a-888f-10df2538d171\") " pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.680834 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d5d0-account-create-update-nlvhz"] Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.697614 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zwc2\" (UniqueName: \"kubernetes.io/projected/3d291e4f-5daf-4e1a-888f-10df2538d171-kube-api-access-8zwc2\") pod 
\"nova-cell0-43f6-account-create-update-qd2ns\" (UID: \"3d291e4f-5daf-4e1a-888f-10df2538d171\") " pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.757117 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.779548 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353bc295-c08f-40a8-97eb-a6d110737f71-operator-scripts\") pod \"nova-cell1-d5d0-account-create-update-nlvhz\" (UID: \"353bc295-c08f-40a8-97eb-a6d110737f71\") " pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.779655 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkqw7\" (UniqueName: \"kubernetes.io/projected/353bc295-c08f-40a8-97eb-a6d110737f71-kube-api-access-dkqw7\") pod \"nova-cell1-d5d0-account-create-update-nlvhz\" (UID: \"353bc295-c08f-40a8-97eb-a6d110737f71\") " pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.882369 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353bc295-c08f-40a8-97eb-a6d110737f71-operator-scripts\") pod \"nova-cell1-d5d0-account-create-update-nlvhz\" (UID: \"353bc295-c08f-40a8-97eb-a6d110737f71\") " pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.882471 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkqw7\" (UniqueName: \"kubernetes.io/projected/353bc295-c08f-40a8-97eb-a6d110737f71-kube-api-access-dkqw7\") pod \"nova-cell1-d5d0-account-create-update-nlvhz\" (UID: 
\"353bc295-c08f-40a8-97eb-a6d110737f71\") " pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.883720 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353bc295-c08f-40a8-97eb-a6d110737f71-operator-scripts\") pod \"nova-cell1-d5d0-account-create-update-nlvhz\" (UID: \"353bc295-c08f-40a8-97eb-a6d110737f71\") " pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.899870 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkqw7\" (UniqueName: \"kubernetes.io/projected/353bc295-c08f-40a8-97eb-a6d110737f71-kube-api-access-dkqw7\") pod \"nova-cell1-d5d0-account-create-update-nlvhz\" (UID: \"353bc295-c08f-40a8-97eb-a6d110737f71\") " pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.951711 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.977567 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:16 crc kubenswrapper[4890]: I0121 15:54:16.996414 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q2zlv"] Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.155052 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xgb85"] Jan 21 15:54:17 crc kubenswrapper[4890]: W0121 15:54:17.351576 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd07a67c_449b_4a93_8af5_b050a682d06b.slice/crio-0b2de5fb7eb6d8718926bdc0f506b5e1b5a58ff141f25a5006603ba80e748be8 WatchSource:0}: Error finding container 0b2de5fb7eb6d8718926bdc0f506b5e1b5a58ff141f25a5006603ba80e748be8: Status 404 returned error can't find the container with id 0b2de5fb7eb6d8718926bdc0f506b5e1b5a58ff141f25a5006603ba80e748be8 Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.419583 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e776-account-create-update-7pzwv"] Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.469618 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xxnmh"] Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.619668 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-43f6-account-create-update-qd2ns"] Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.702346 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e776-account-create-update-7pzwv" event={"ID":"bd07a67c-449b-4a93-8af5-b050a682d06b","Type":"ContainerStarted","Data":"0b2de5fb7eb6d8718926bdc0f506b5e1b5a58ff141f25a5006603ba80e748be8"} Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.703618 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xxnmh" 
event={"ID":"916a58e6-1bc6-47d4-a82d-15979fbf9dea","Type":"ContainerStarted","Data":"42fec978ef42396a4997b5259a21694613c917c9d93c0b24ef3c68b0acd12bdb"} Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.705795 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q2zlv" event={"ID":"11a95a75-e775-40eb-8e62-74b4e9b04f1f","Type":"ContainerStarted","Data":"11db58464c557dc77655a8fd1850c22413732365a8fefd05581911c35155a41d"} Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.705841 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q2zlv" event={"ID":"11a95a75-e775-40eb-8e62-74b4e9b04f1f","Type":"ContainerStarted","Data":"3e3b2634b5059f5086a9241beb0e243f34507c137c81da3711fc4deb8bf67cc5"} Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.707433 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" event={"ID":"3d291e4f-5daf-4e1a-888f-10df2538d171","Type":"ContainerStarted","Data":"4891c2792495cd47d09a188d8f0ac32f81ebdbda575892d941c6aa9385595e40"} Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.709686 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xgb85" event={"ID":"2d6dfa05-969b-4691-84e5-7ca46d82b5c2","Type":"ContainerStarted","Data":"6772a790cec8c1317eefe66fb08e48511477a083656c68be868332c54d81cfd3"} Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.709717 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xgb85" event={"ID":"2d6dfa05-969b-4691-84e5-7ca46d82b5c2","Type":"ContainerStarted","Data":"d36f22070cba34d19601c8fa5a479343a2ebcb78dc267460569c4cd96474c8d8"} Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.743146 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-xgb85" podStartSLOduration=1.74312701 podStartE2EDuration="1.74312701s" 
podCreationTimestamp="2026-01-21 15:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:54:17.73668016 +0000 UTC m=+1340.098122569" watchObservedRunningTime="2026-01-21 15:54:17.74312701 +0000 UTC m=+1340.104569419" Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.747253 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-q2zlv" podStartSLOduration=1.7472401629999998 podStartE2EDuration="1.747240163s" podCreationTimestamp="2026-01-21 15:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:54:17.724318553 +0000 UTC m=+1340.085760962" watchObservedRunningTime="2026-01-21 15:54:17.747240163 +0000 UTC m=+1340.108682572" Jan 21 15:54:17 crc kubenswrapper[4890]: I0121 15:54:17.762626 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d5d0-account-create-update-nlvhz"] Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.685416 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.691434 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d57adef6-94fe-4333-bf61-5ec2e55af351" containerName="glance-log" containerID="cri-o://dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0" gracePeriod=30 Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.691548 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d57adef6-94fe-4333-bf61-5ec2e55af351" containerName="glance-httpd" containerID="cri-o://fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419" gracePeriod=30 Jan 21 15:54:18 crc kubenswrapper[4890]: 
I0121 15:54:18.721628 4890 generic.go:334] "Generic (PLEG): container finished" podID="353bc295-c08f-40a8-97eb-a6d110737f71" containerID="4f66fb67fe1dced259cb614ff78069ee18ae227a09963d6bdd3e2e7adb0e8d72" exitCode=0 Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.721703 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" event={"ID":"353bc295-c08f-40a8-97eb-a6d110737f71","Type":"ContainerDied","Data":"4f66fb67fe1dced259cb614ff78069ee18ae227a09963d6bdd3e2e7adb0e8d72"} Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.721734 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" event={"ID":"353bc295-c08f-40a8-97eb-a6d110737f71","Type":"ContainerStarted","Data":"98ca6311154f6371f78b98c7e1c5b810f5c57cb6f0ea1245edfd433ac9379b77"} Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.734917 4890 generic.go:334] "Generic (PLEG): container finished" podID="3d291e4f-5daf-4e1a-888f-10df2538d171" containerID="f1c1f17d85bf8bb53eef6757f0bd48539a6d6eae586a8fb378144b48d4a6d1e0" exitCode=0 Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.735017 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" event={"ID":"3d291e4f-5daf-4e1a-888f-10df2538d171","Type":"ContainerDied","Data":"f1c1f17d85bf8bb53eef6757f0bd48539a6d6eae586a8fb378144b48d4a6d1e0"} Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.745735 4890 generic.go:334] "Generic (PLEG): container finished" podID="2d6dfa05-969b-4691-84e5-7ca46d82b5c2" containerID="6772a790cec8c1317eefe66fb08e48511477a083656c68be868332c54d81cfd3" exitCode=0 Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.745841 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xgb85" 
event={"ID":"2d6dfa05-969b-4691-84e5-7ca46d82b5c2","Type":"ContainerDied","Data":"6772a790cec8c1317eefe66fb08e48511477a083656c68be868332c54d81cfd3"} Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.763696 4890 generic.go:334] "Generic (PLEG): container finished" podID="bd07a67c-449b-4a93-8af5-b050a682d06b" containerID="745ce70403665468b313680739c5b2b32b49b16f488cddbad84df4f2edafcc2c" exitCode=0 Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.763791 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e776-account-create-update-7pzwv" event={"ID":"bd07a67c-449b-4a93-8af5-b050a682d06b","Type":"ContainerDied","Data":"745ce70403665468b313680739c5b2b32b49b16f488cddbad84df4f2edafcc2c"} Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.781927 4890 generic.go:334] "Generic (PLEG): container finished" podID="916a58e6-1bc6-47d4-a82d-15979fbf9dea" containerID="3cbd1e381f8a05f81b896d527f93626ca4b45dcad610a582cdfcabb31938696c" exitCode=0 Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.782423 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xxnmh" event={"ID":"916a58e6-1bc6-47d4-a82d-15979fbf9dea","Type":"ContainerDied","Data":"3cbd1e381f8a05f81b896d527f93626ca4b45dcad610a582cdfcabb31938696c"} Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.784283 4890 generic.go:334] "Generic (PLEG): container finished" podID="11a95a75-e775-40eb-8e62-74b4e9b04f1f" containerID="11db58464c557dc77655a8fd1850c22413732365a8fefd05581911c35155a41d" exitCode=0 Jan 21 15:54:18 crc kubenswrapper[4890]: I0121 15:54:18.784333 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q2zlv" event={"ID":"11a95a75-e775-40eb-8e62-74b4e9b04f1f","Type":"ContainerDied","Data":"11db58464c557dc77655a8fd1850c22413732365a8fefd05581911c35155a41d"} Jan 21 15:54:19 crc kubenswrapper[4890]: I0121 15:54:19.397953 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 21 15:54:19 crc kubenswrapper[4890]: I0121 15:54:19.398316 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7e221599-8207-445a-bdbf-79cc7b21590a" containerName="glance-log" containerID="cri-o://f24965e1823d8ec92fee963545e1ae34491cd2ced770b9228ea5dd223fd108a6" gracePeriod=30 Jan 21 15:54:19 crc kubenswrapper[4890]: I0121 15:54:19.398418 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7e221599-8207-445a-bdbf-79cc7b21590a" containerName="glance-httpd" containerID="cri-o://7a84f64e5ecf6747684f4ee3339734668618656034a8696a2604e4233cf17843" gracePeriod=30 Jan 21 15:54:19 crc kubenswrapper[4890]: I0121 15:54:19.793914 4890 generic.go:334] "Generic (PLEG): container finished" podID="7e221599-8207-445a-bdbf-79cc7b21590a" containerID="f24965e1823d8ec92fee963545e1ae34491cd2ced770b9228ea5dd223fd108a6" exitCode=143 Jan 21 15:54:19 crc kubenswrapper[4890]: I0121 15:54:19.794203 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e221599-8207-445a-bdbf-79cc7b21590a","Type":"ContainerDied","Data":"f24965e1823d8ec92fee963545e1ae34491cd2ced770b9228ea5dd223fd108a6"} Jan 21 15:54:19 crc kubenswrapper[4890]: I0121 15:54:19.796163 4890 generic.go:334] "Generic (PLEG): container finished" podID="d57adef6-94fe-4333-bf61-5ec2e55af351" containerID="dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0" exitCode=143 Jan 21 15:54:19 crc kubenswrapper[4890]: I0121 15:54:19.796273 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d57adef6-94fe-4333-bf61-5ec2e55af351","Type":"ContainerDied","Data":"dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0"} Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.232280 4890 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.340812 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353bc295-c08f-40a8-97eb-a6d110737f71-operator-scripts\") pod \"353bc295-c08f-40a8-97eb-a6d110737f71\" (UID: \"353bc295-c08f-40a8-97eb-a6d110737f71\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.340977 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkqw7\" (UniqueName: \"kubernetes.io/projected/353bc295-c08f-40a8-97eb-a6d110737f71-kube-api-access-dkqw7\") pod \"353bc295-c08f-40a8-97eb-a6d110737f71\" (UID: \"353bc295-c08f-40a8-97eb-a6d110737f71\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.345951 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/353bc295-c08f-40a8-97eb-a6d110737f71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "353bc295-c08f-40a8-97eb-a6d110737f71" (UID: "353bc295-c08f-40a8-97eb-a6d110737f71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.355631 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353bc295-c08f-40a8-97eb-a6d110737f71-kube-api-access-dkqw7" (OuterVolumeSpecName: "kube-api-access-dkqw7") pod "353bc295-c08f-40a8-97eb-a6d110737f71" (UID: "353bc295-c08f-40a8-97eb-a6d110737f71"). InnerVolumeSpecName "kube-api-access-dkqw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.407185 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.411536 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.417674 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.425932 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.442555 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.444331 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkqw7\" (UniqueName: \"kubernetes.io/projected/353bc295-c08f-40a8-97eb-a6d110737f71-kube-api-access-dkqw7\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.444366 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353bc295-c08f-40a8-97eb-a6d110737f71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.545569 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-operator-scripts\") pod \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\" (UID: \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.545727 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb8w9\" (UniqueName: 
\"kubernetes.io/projected/916a58e6-1bc6-47d4-a82d-15979fbf9dea-kube-api-access-qb8w9\") pod \"916a58e6-1bc6-47d4-a82d-15979fbf9dea\" (UID: \"916a58e6-1bc6-47d4-a82d-15979fbf9dea\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.545787 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85h7p\" (UniqueName: \"kubernetes.io/projected/bd07a67c-449b-4a93-8af5-b050a682d06b-kube-api-access-85h7p\") pod \"bd07a67c-449b-4a93-8af5-b050a682d06b\" (UID: \"bd07a67c-449b-4a93-8af5-b050a682d06b\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.545853 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd07a67c-449b-4a93-8af5-b050a682d06b-operator-scripts\") pod \"bd07a67c-449b-4a93-8af5-b050a682d06b\" (UID: \"bd07a67c-449b-4a93-8af5-b050a682d06b\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.545881 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64rlt\" (UniqueName: \"kubernetes.io/projected/11a95a75-e775-40eb-8e62-74b4e9b04f1f-kube-api-access-64rlt\") pod \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\" (UID: \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.545925 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr6jb\" (UniqueName: \"kubernetes.io/projected/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-kube-api-access-gr6jb\") pod \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\" (UID: \"2d6dfa05-969b-4691-84e5-7ca46d82b5c2\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.546011 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/916a58e6-1bc6-47d4-a82d-15979fbf9dea-operator-scripts\") pod \"916a58e6-1bc6-47d4-a82d-15979fbf9dea\" (UID: 
\"916a58e6-1bc6-47d4-a82d-15979fbf9dea\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.546089 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d291e4f-5daf-4e1a-888f-10df2538d171-operator-scripts\") pod \"3d291e4f-5daf-4e1a-888f-10df2538d171\" (UID: \"3d291e4f-5daf-4e1a-888f-10df2538d171\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.546173 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zwc2\" (UniqueName: \"kubernetes.io/projected/3d291e4f-5daf-4e1a-888f-10df2538d171-kube-api-access-8zwc2\") pod \"3d291e4f-5daf-4e1a-888f-10df2538d171\" (UID: \"3d291e4f-5daf-4e1a-888f-10df2538d171\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.546237 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a95a75-e775-40eb-8e62-74b4e9b04f1f-operator-scripts\") pod \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\" (UID: \"11a95a75-e775-40eb-8e62-74b4e9b04f1f\") " Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.547745 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d291e4f-5daf-4e1a-888f-10df2538d171-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d291e4f-5daf-4e1a-888f-10df2538d171" (UID: "3d291e4f-5daf-4e1a-888f-10df2538d171"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.547751 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916a58e6-1bc6-47d4-a82d-15979fbf9dea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "916a58e6-1bc6-47d4-a82d-15979fbf9dea" (UID: "916a58e6-1bc6-47d4-a82d-15979fbf9dea"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.548519 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d6dfa05-969b-4691-84e5-7ca46d82b5c2" (UID: "2d6dfa05-969b-4691-84e5-7ca46d82b5c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.556140 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a95a75-e775-40eb-8e62-74b4e9b04f1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11a95a75-e775-40eb-8e62-74b4e9b04f1f" (UID: "11a95a75-e775-40eb-8e62-74b4e9b04f1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.561643 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd07a67c-449b-4a93-8af5-b050a682d06b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd07a67c-449b-4a93-8af5-b050a682d06b" (UID: "bd07a67c-449b-4a93-8af5-b050a682d06b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.567509 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-kube-api-access-gr6jb" (OuterVolumeSpecName: "kube-api-access-gr6jb") pod "2d6dfa05-969b-4691-84e5-7ca46d82b5c2" (UID: "2d6dfa05-969b-4691-84e5-7ca46d82b5c2"). InnerVolumeSpecName "kube-api-access-gr6jb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.567600 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd07a67c-449b-4a93-8af5-b050a682d06b-kube-api-access-85h7p" (OuterVolumeSpecName: "kube-api-access-85h7p") pod "bd07a67c-449b-4a93-8af5-b050a682d06b" (UID: "bd07a67c-449b-4a93-8af5-b050a682d06b"). InnerVolumeSpecName "kube-api-access-85h7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.568141 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916a58e6-1bc6-47d4-a82d-15979fbf9dea-kube-api-access-qb8w9" (OuterVolumeSpecName: "kube-api-access-qb8w9") pod "916a58e6-1bc6-47d4-a82d-15979fbf9dea" (UID: "916a58e6-1bc6-47d4-a82d-15979fbf9dea"). InnerVolumeSpecName "kube-api-access-qb8w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.575536 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d291e4f-5daf-4e1a-888f-10df2538d171-kube-api-access-8zwc2" (OuterVolumeSpecName: "kube-api-access-8zwc2") pod "3d291e4f-5daf-4e1a-888f-10df2538d171" (UID: "3d291e4f-5daf-4e1a-888f-10df2538d171"). InnerVolumeSpecName "kube-api-access-8zwc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.575619 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a95a75-e775-40eb-8e62-74b4e9b04f1f-kube-api-access-64rlt" (OuterVolumeSpecName: "kube-api-access-64rlt") pod "11a95a75-e775-40eb-8e62-74b4e9b04f1f" (UID: "11a95a75-e775-40eb-8e62-74b4e9b04f1f"). InnerVolumeSpecName "kube-api-access-64rlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648065 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/916a58e6-1bc6-47d4-a82d-15979fbf9dea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648106 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d291e4f-5daf-4e1a-888f-10df2538d171-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648119 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zwc2\" (UniqueName: \"kubernetes.io/projected/3d291e4f-5daf-4e1a-888f-10df2538d171-kube-api-access-8zwc2\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648139 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a95a75-e775-40eb-8e62-74b4e9b04f1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648149 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648160 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb8w9\" (UniqueName: \"kubernetes.io/projected/916a58e6-1bc6-47d4-a82d-15979fbf9dea-kube-api-access-qb8w9\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648170 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85h7p\" (UniqueName: \"kubernetes.io/projected/bd07a67c-449b-4a93-8af5-b050a682d06b-kube-api-access-85h7p\") on node \"crc\" DevicePath \"\"" Jan 21 
15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648181 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd07a67c-449b-4a93-8af5-b050a682d06b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648192 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64rlt\" (UniqueName: \"kubernetes.io/projected/11a95a75-e775-40eb-8e62-74b4e9b04f1f-kube-api-access-64rlt\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.648203 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr6jb\" (UniqueName: \"kubernetes.io/projected/2d6dfa05-969b-4691-84e5-7ca46d82b5c2-kube-api-access-gr6jb\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.805083 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xgb85" event={"ID":"2d6dfa05-969b-4691-84e5-7ca46d82b5c2","Type":"ContainerDied","Data":"d36f22070cba34d19601c8fa5a479343a2ebcb78dc267460569c4cd96474c8d8"} Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.805117 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xgb85" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.805127 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d36f22070cba34d19601c8fa5a479343a2ebcb78dc267460569c4cd96474c8d8" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.806857 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e776-account-create-update-7pzwv" event={"ID":"bd07a67c-449b-4a93-8af5-b050a682d06b","Type":"ContainerDied","Data":"0b2de5fb7eb6d8718926bdc0f506b5e1b5a58ff141f25a5006603ba80e748be8"} Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.806886 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e776-account-create-update-7pzwv" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.806897 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b2de5fb7eb6d8718926bdc0f506b5e1b5a58ff141f25a5006603ba80e748be8" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.808312 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xxnmh" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.808341 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xxnmh" event={"ID":"916a58e6-1bc6-47d4-a82d-15979fbf9dea","Type":"ContainerDied","Data":"42fec978ef42396a4997b5259a21694613c917c9d93c0b24ef3c68b0acd12bdb"} Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.808440 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42fec978ef42396a4997b5259a21694613c917c9d93c0b24ef3c68b0acd12bdb" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.816755 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q2zlv" event={"ID":"11a95a75-e775-40eb-8e62-74b4e9b04f1f","Type":"ContainerDied","Data":"3e3b2634b5059f5086a9241beb0e243f34507c137c81da3711fc4deb8bf67cc5"} Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.816798 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e3b2634b5059f5086a9241beb0e243f34507c137c81da3711fc4deb8bf67cc5" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.816870 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q2zlv" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.820294 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" event={"ID":"353bc295-c08f-40a8-97eb-a6d110737f71","Type":"ContainerDied","Data":"98ca6311154f6371f78b98c7e1c5b810f5c57cb6f0ea1245edfd433ac9379b77"} Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.820325 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ca6311154f6371f78b98c7e1c5b810f5c57cb6f0ea1245edfd433ac9379b77" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.820405 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d5d0-account-create-update-nlvhz" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.822091 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" event={"ID":"3d291e4f-5daf-4e1a-888f-10df2538d171","Type":"ContainerDied","Data":"4891c2792495cd47d09a188d8f0ac32f81ebdbda575892d941c6aa9385595e40"} Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.822119 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4891c2792495cd47d09a188d8f0ac32f81ebdbda575892d941c6aa9385595e40" Jan 21 15:54:20 crc kubenswrapper[4890]: I0121 15:54:20.822145 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43f6-account-create-update-qd2ns" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.382949 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.485963 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-logs\") pod \"d57adef6-94fe-4333-bf61-5ec2e55af351\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.486036 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-public-tls-certs\") pod \"d57adef6-94fe-4333-bf61-5ec2e55af351\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.486057 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-combined-ca-bundle\") pod \"d57adef6-94fe-4333-bf61-5ec2e55af351\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.486093 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4dmg\" (UniqueName: \"kubernetes.io/projected/d57adef6-94fe-4333-bf61-5ec2e55af351-kube-api-access-f4dmg\") pod \"d57adef6-94fe-4333-bf61-5ec2e55af351\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.486129 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"d57adef6-94fe-4333-bf61-5ec2e55af351\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.486164 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-httpd-run\") pod \"d57adef6-94fe-4333-bf61-5ec2e55af351\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.486220 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-config-data\") pod \"d57adef6-94fe-4333-bf61-5ec2e55af351\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.486268 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-scripts\") pod \"d57adef6-94fe-4333-bf61-5ec2e55af351\" (UID: \"d57adef6-94fe-4333-bf61-5ec2e55af351\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.486580 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-logs" (OuterVolumeSpecName: "logs") pod "d57adef6-94fe-4333-bf61-5ec2e55af351" (UID: "d57adef6-94fe-4333-bf61-5ec2e55af351"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.486823 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.488787 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d57adef6-94fe-4333-bf61-5ec2e55af351" (UID: "d57adef6-94fe-4333-bf61-5ec2e55af351"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.495211 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "d57adef6-94fe-4333-bf61-5ec2e55af351" (UID: "d57adef6-94fe-4333-bf61-5ec2e55af351"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.497597 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57adef6-94fe-4333-bf61-5ec2e55af351-kube-api-access-f4dmg" (OuterVolumeSpecName: "kube-api-access-f4dmg") pod "d57adef6-94fe-4333-bf61-5ec2e55af351" (UID: "d57adef6-94fe-4333-bf61-5ec2e55af351"). InnerVolumeSpecName "kube-api-access-f4dmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.514296 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-scripts" (OuterVolumeSpecName: "scripts") pod "d57adef6-94fe-4333-bf61-5ec2e55af351" (UID: "d57adef6-94fe-4333-bf61-5ec2e55af351"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.557469 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-config-data" (OuterVolumeSpecName: "config-data") pod "d57adef6-94fe-4333-bf61-5ec2e55af351" (UID: "d57adef6-94fe-4333-bf61-5ec2e55af351"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.560324 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d57adef6-94fe-4333-bf61-5ec2e55af351" (UID: "d57adef6-94fe-4333-bf61-5ec2e55af351"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.567053 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d57adef6-94fe-4333-bf61-5ec2e55af351" (UID: "d57adef6-94fe-4333-bf61-5ec2e55af351"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.588821 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.589075 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.589134 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.589193 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57adef6-94fe-4333-bf61-5ec2e55af351-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 
21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.589255 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4dmg\" (UniqueName: \"kubernetes.io/projected/d57adef6-94fe-4333-bf61-5ec2e55af351-kube-api-access-f4dmg\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.589325 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.589436 4890 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d57adef6-94fe-4333-bf61-5ec2e55af351-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.609024 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.691033 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.843920 4890 generic.go:334] "Generic (PLEG): container finished" podID="d57adef6-94fe-4333-bf61-5ec2e55af351" containerID="fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419" exitCode=0 Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.843977 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.843995 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d57adef6-94fe-4333-bf61-5ec2e55af351","Type":"ContainerDied","Data":"fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419"} Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.844470 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d57adef6-94fe-4333-bf61-5ec2e55af351","Type":"ContainerDied","Data":"75fa82c99fc465605faade3472d1c9134d40b37995440c5e8b4d227d6dd65722"} Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.844498 4890 scope.go:117] "RemoveContainer" containerID="fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.846959 4890 generic.go:334] "Generic (PLEG): container finished" podID="7e221599-8207-445a-bdbf-79cc7b21590a" containerID="7a84f64e5ecf6747684f4ee3339734668618656034a8696a2604e4233cf17843" exitCode=0 Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.847018 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e221599-8207-445a-bdbf-79cc7b21590a","Type":"ContainerDied","Data":"7a84f64e5ecf6747684f4ee3339734668618656034a8696a2604e4233cf17843"} Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.876382 4890 scope.go:117] "RemoveContainer" containerID="dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.900648 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.917291 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.917725 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.928323 4890 scope.go:117] "RemoveContainer" containerID="fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.929095 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419\": container with ID starting with fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419 not found: ID does not exist" containerID="fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.929134 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419"} err="failed to get container status \"fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419\": rpc error: code = NotFound desc = could not find container \"fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419\": container with ID starting with fe5c927bed0a9aafe2a532d918b75f1ec632e2a457050f866a906494bb301419 not found: ID does not exist" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.929232 4890 scope.go:117] "RemoveContainer" containerID="dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.929494 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0\": container with ID starting with 
dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0 not found: ID does not exist" containerID="dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.929522 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0"} err="failed to get container status \"dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0\": rpc error: code = NotFound desc = could not find container \"dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0\": container with ID starting with dc4c8851c2963bbad3616a54a09461069b35b241c0f302a9930a41cc30d14dd0 not found: ID does not exist" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.930706 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931118 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916a58e6-1bc6-47d4-a82d-15979fbf9dea" containerName="mariadb-database-create" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931139 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="916a58e6-1bc6-47d4-a82d-15979fbf9dea" containerName="mariadb-database-create" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931161 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6dfa05-969b-4691-84e5-7ca46d82b5c2" containerName="mariadb-database-create" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931169 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6dfa05-969b-4691-84e5-7ca46d82b5c2" containerName="mariadb-database-create" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931180 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d291e4f-5daf-4e1a-888f-10df2538d171" containerName="mariadb-account-create-update" Jan 21 15:54:22 crc 
kubenswrapper[4890]: I0121 15:54:22.931216 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d291e4f-5daf-4e1a-888f-10df2538d171" containerName="mariadb-account-create-update" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931243 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e221599-8207-445a-bdbf-79cc7b21590a" containerName="glance-httpd" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931253 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e221599-8207-445a-bdbf-79cc7b21590a" containerName="glance-httpd" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931262 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd07a67c-449b-4a93-8af5-b050a682d06b" containerName="mariadb-account-create-update" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931268 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd07a67c-449b-4a93-8af5-b050a682d06b" containerName="mariadb-account-create-update" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931284 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353bc295-c08f-40a8-97eb-a6d110737f71" containerName="mariadb-account-create-update" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931291 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="353bc295-c08f-40a8-97eb-a6d110737f71" containerName="mariadb-account-create-update" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931301 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57adef6-94fe-4333-bf61-5ec2e55af351" containerName="glance-httpd" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931308 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57adef6-94fe-4333-bf61-5ec2e55af351" containerName="glance-httpd" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931319 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57adef6-94fe-4333-bf61-5ec2e55af351" 
containerName="glance-log" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931326 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57adef6-94fe-4333-bf61-5ec2e55af351" containerName="glance-log" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931342 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a95a75-e775-40eb-8e62-74b4e9b04f1f" containerName="mariadb-database-create" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931368 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a95a75-e775-40eb-8e62-74b4e9b04f1f" containerName="mariadb-database-create" Jan 21 15:54:22 crc kubenswrapper[4890]: E0121 15:54:22.931380 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e221599-8207-445a-bdbf-79cc7b21590a" containerName="glance-log" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931387 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e221599-8207-445a-bdbf-79cc7b21590a" containerName="glance-log" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931571 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6dfa05-969b-4691-84e5-7ca46d82b5c2" containerName="mariadb-database-create" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931588 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57adef6-94fe-4333-bf61-5ec2e55af351" containerName="glance-log" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931600 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a95a75-e775-40eb-8e62-74b4e9b04f1f" containerName="mariadb-database-create" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931616 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="353bc295-c08f-40a8-97eb-a6d110737f71" containerName="mariadb-account-create-update" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931632 4890 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7e221599-8207-445a-bdbf-79cc7b21590a" containerName="glance-log" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931644 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d291e4f-5daf-4e1a-888f-10df2538d171" containerName="mariadb-account-create-update" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931653 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57adef6-94fe-4333-bf61-5ec2e55af351" containerName="glance-httpd" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931664 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd07a67c-449b-4a93-8af5-b050a682d06b" containerName="mariadb-account-create-update" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931672 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="916a58e6-1bc6-47d4-a82d-15979fbf9dea" containerName="mariadb-database-create" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.931684 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e221599-8207-445a-bdbf-79cc7b21590a" containerName="glance-httpd" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.934373 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.937219 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.937252 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.942437 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.997092 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-httpd-run\") pod \"7e221599-8207-445a-bdbf-79cc7b21590a\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.997141 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-logs\") pod \"7e221599-8207-445a-bdbf-79cc7b21590a\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.997217 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-config-data\") pod \"7e221599-8207-445a-bdbf-79cc7b21590a\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.997244 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl8wq\" (UniqueName: \"kubernetes.io/projected/7e221599-8207-445a-bdbf-79cc7b21590a-kube-api-access-rl8wq\") pod \"7e221599-8207-445a-bdbf-79cc7b21590a\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " Jan 21 15:54:22 
crc kubenswrapper[4890]: I0121 15:54:22.997380 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-scripts\") pod \"7e221599-8207-445a-bdbf-79cc7b21590a\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.997422 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-internal-tls-certs\") pod \"7e221599-8207-445a-bdbf-79cc7b21590a\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.997455 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7e221599-8207-445a-bdbf-79cc7b21590a\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " Jan 21 15:54:22 crc kubenswrapper[4890]: I0121 15:54:22.997503 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-combined-ca-bundle\") pod \"7e221599-8207-445a-bdbf-79cc7b21590a\" (UID: \"7e221599-8207-445a-bdbf-79cc7b21590a\") " Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.001933 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7e221599-8207-445a-bdbf-79cc7b21590a" (UID: "7e221599-8207-445a-bdbf-79cc7b21590a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.008886 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-scripts" (OuterVolumeSpecName: "scripts") pod "7e221599-8207-445a-bdbf-79cc7b21590a" (UID: "7e221599-8207-445a-bdbf-79cc7b21590a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.009171 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-logs" (OuterVolumeSpecName: "logs") pod "7e221599-8207-445a-bdbf-79cc7b21590a" (UID: "7e221599-8207-445a-bdbf-79cc7b21590a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.020023 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "7e221599-8207-445a-bdbf-79cc7b21590a" (UID: "7e221599-8207-445a-bdbf-79cc7b21590a"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.046228 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e221599-8207-445a-bdbf-79cc7b21590a-kube-api-access-rl8wq" (OuterVolumeSpecName: "kube-api-access-rl8wq") pod "7e221599-8207-445a-bdbf-79cc7b21590a" (UID: "7e221599-8207-445a-bdbf-79cc7b21590a"). InnerVolumeSpecName "kube-api-access-rl8wq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.073624 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e221599-8207-445a-bdbf-79cc7b21590a" (UID: "7e221599-8207-445a-bdbf-79cc7b21590a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.093587 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7e221599-8207-445a-bdbf-79cc7b21590a" (UID: "7e221599-8207-445a-bdbf-79cc7b21590a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.096604 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-config-data" (OuterVolumeSpecName: "config-data") pod "7e221599-8207-445a-bdbf-79cc7b21590a" (UID: "7e221599-8207-445a-bdbf-79cc7b21590a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.104458 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.104519 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.104541 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.104603 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.104755 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") 
" pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.104829 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.104906 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47dnj\" (UniqueName: \"kubernetes.io/projected/e775a69e-619f-4920-8fc9-6d216e400c0e-kube-api-access-47dnj\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.105004 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-logs\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.105210 4890 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.105253 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e221599-8207-445a-bdbf-79cc7b21590a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.105273 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.105286 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl8wq\" (UniqueName: \"kubernetes.io/projected/7e221599-8207-445a-bdbf-79cc7b21590a-kube-api-access-rl8wq\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.105300 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.105312 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.115169 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.115222 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e221599-8207-445a-bdbf-79cc7b21590a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.144287 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.216718 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.216783 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.216814 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.216835 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47dnj\" (UniqueName: \"kubernetes.io/projected/e775a69e-619f-4920-8fc9-6d216e400c0e-kube-api-access-47dnj\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.216861 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-logs\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.216907 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " 
pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.216944 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.216962 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.217085 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.219127 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.225090 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.225446 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.225650 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.225697 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-logs\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.226163 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.232838 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.244698 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47dnj\" (UniqueName: \"kubernetes.io/projected/e775a69e-619f-4920-8fc9-6d216e400c0e-kube-api-access-47dnj\") pod 
\"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.258801 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.281827 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.856084 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e221599-8207-445a-bdbf-79cc7b21590a","Type":"ContainerDied","Data":"1f914164fb7e43008552114c062f804aa5ac5217e8e9840209a21d53bc2bd24a"} Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.856437 4890 scope.go:117] "RemoveContainer" containerID="7a84f64e5ecf6747684f4ee3339734668618656034a8696a2604e4233cf17843" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.856140 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.880217 4890 scope.go:117] "RemoveContainer" containerID="f24965e1823d8ec92fee963545e1ae34491cd2ced770b9228ea5dd223fd108a6" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.910708 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.930636 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d57adef6-94fe-4333-bf61-5ec2e55af351" path="/var/lib/kubelet/pods/d57adef6-94fe-4333-bf61-5ec2e55af351/volumes" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.931462 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.972430 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.974934 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.980093 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.980386 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:54:23 crc kubenswrapper[4890]: I0121 15:54:23.980475 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.022186 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.136064 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-logs\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.136163 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.136202 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.136231 
4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.136261 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7pc\" (UniqueName: \"kubernetes.io/projected/697e1d3a-fab0-471b-bea8-43212f489fec-kube-api-access-qp7pc\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.136284 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.136376 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.136437 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 
crc kubenswrapper[4890]: I0121 15:54:24.237734 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.237804 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.237836 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.237879 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp7pc\" (UniqueName: \"kubernetes.io/projected/697e1d3a-fab0-471b-bea8-43212f489fec-kube-api-access-qp7pc\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.237903 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.237980 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.238040 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.238066 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-logs\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.238656 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-logs\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.240023 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.240283 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.245047 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.245438 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.246023 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.250346 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.256337 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp7pc\" (UniqueName: 
\"kubernetes.io/projected/697e1d3a-fab0-471b-bea8-43212f489fec-kube-api-access-qp7pc\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.278472 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.305847 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.882469 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.888232 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e775a69e-619f-4920-8fc9-6d216e400c0e","Type":"ContainerStarted","Data":"1ca3498c72178f6185568c6444f79a4b05e9c4a827b67e2ab8184900041c243b"} Jan 21 15:54:24 crc kubenswrapper[4890]: I0121 15:54:24.888284 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e775a69e-619f-4920-8fc9-6d216e400c0e","Type":"ContainerStarted","Data":"bcca4b36076ab261210e22493a6b04ff5095992300367c852b98b6aa5f867e4c"} Jan 21 15:54:25 crc kubenswrapper[4890]: I0121 15:54:25.903588 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"697e1d3a-fab0-471b-bea8-43212f489fec","Type":"ContainerStarted","Data":"215061d8efd4d431d66f95a42747cad2109fdd862c0bf59e3b7f07e5e4da7f48"} Jan 21 15:54:25 crc kubenswrapper[4890]: I0121 15:54:25.927633 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="7e221599-8207-445a-bdbf-79cc7b21590a" path="/var/lib/kubelet/pods/7e221599-8207-445a-bdbf-79cc7b21590a/volumes" Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.858747 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmzvq"] Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.860158 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.863338 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.863362 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cn85b" Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.869636 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.883061 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmzvq"] Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.930624 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"697e1d3a-fab0-471b-bea8-43212f489fec","Type":"ContainerStarted","Data":"9e0291aac0c698ccda6b3ca51011fe12c6a3dfe3353a4fd388da9648e8a82def"} Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.930665 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"697e1d3a-fab0-471b-bea8-43212f489fec","Type":"ContainerStarted","Data":"a0f8f3b3b110e555d59db6b93fc91f9b56e10fd7253b81778b2e41c868e02c8a"} Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.944819 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"e775a69e-619f-4920-8fc9-6d216e400c0e","Type":"ContainerStarted","Data":"449855515a900befe1127318232d23ee1ce08ab1fc81e724dd3ee85e1bdccca0"} Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.980542 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.9805059099999998 podStartE2EDuration="3.98050591s" podCreationTimestamp="2026-01-21 15:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:54:26.967311982 +0000 UTC m=+1349.328754391" watchObservedRunningTime="2026-01-21 15:54:26.98050591 +0000 UTC m=+1349.341948319" Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.993051 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-config-data\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.993148 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js8sd\" (UniqueName: \"kubernetes.io/projected/581dea93-d2d2-45fd-9b38-c0829c031b5c-kube-api-access-js8sd\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:26 crc kubenswrapper[4890]: I0121 15:54:26.993305 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-scripts\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:26 crc 
kubenswrapper[4890]: I0121 15:54:26.993411 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.005663 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.005644924 podStartE2EDuration="5.005644924s" podCreationTimestamp="2026-01-21 15:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:54:27.000286431 +0000 UTC m=+1349.361728850" watchObservedRunningTime="2026-01-21 15:54:27.005644924 +0000 UTC m=+1349.367087333" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.094909 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js8sd\" (UniqueName: \"kubernetes.io/projected/581dea93-d2d2-45fd-9b38-c0829c031b5c-kube-api-access-js8sd\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.095014 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-scripts\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.095061 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.095114 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-config-data\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.100409 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-scripts\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.101519 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-config-data\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.102075 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.112174 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js8sd\" (UniqueName: 
\"kubernetes.io/projected/581dea93-d2d2-45fd-9b38-c0829c031b5c-kube-api-access-js8sd\") pod \"nova-cell0-conductor-db-sync-bmzvq\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.176599 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.664245 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmzvq"] Jan 21 15:54:27 crc kubenswrapper[4890]: W0121 15:54:27.667768 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod581dea93_d2d2_45fd_9b38_c0829c031b5c.slice/crio-b44653e33cd05efedaff5e18818184c73bdafcdb5aefcfabff6e52a366a1dc02 WatchSource:0}: Error finding container b44653e33cd05efedaff5e18818184c73bdafcdb5aefcfabff6e52a366a1dc02: Status 404 returned error can't find the container with id b44653e33cd05efedaff5e18818184c73bdafcdb5aefcfabff6e52a366a1dc02 Jan 21 15:54:27 crc kubenswrapper[4890]: I0121 15:54:27.954856 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bmzvq" event={"ID":"581dea93-d2d2-45fd-9b38-c0829c031b5c","Type":"ContainerStarted","Data":"b44653e33cd05efedaff5e18818184c73bdafcdb5aefcfabff6e52a366a1dc02"} Jan 21 15:54:33 crc kubenswrapper[4890]: I0121 15:54:33.282973 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 15:54:33 crc kubenswrapper[4890]: I0121 15:54:33.283575 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 15:54:33 crc kubenswrapper[4890]: I0121 15:54:33.358876 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Jan 21 15:54:33 crc kubenswrapper[4890]: I0121 15:54:33.371883 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 15:54:33 crc kubenswrapper[4890]: I0121 15:54:33.879328 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:54:34 crc kubenswrapper[4890]: I0121 15:54:34.027790 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 15:54:34 crc kubenswrapper[4890]: I0121 15:54:34.027857 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 15:54:34 crc kubenswrapper[4890]: I0121 15:54:34.307721 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:34 crc kubenswrapper[4890]: I0121 15:54:34.308528 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:34 crc kubenswrapper[4890]: I0121 15:54:34.356621 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:34 crc kubenswrapper[4890]: I0121 15:54:34.363133 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:35 crc kubenswrapper[4890]: I0121 15:54:35.063744 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:35 crc kubenswrapper[4890]: I0121 15:54:35.066386 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:36 crc 
kubenswrapper[4890]: I0121 15:54:36.033729 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 15:54:36 crc kubenswrapper[4890]: I0121 15:54:36.073200 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:54:36 crc kubenswrapper[4890]: I0121 15:54:36.205254 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 15:54:37 crc kubenswrapper[4890]: I0121 15:54:37.082462 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:54:37 crc kubenswrapper[4890]: I0121 15:54:37.082758 4890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:54:37 crc kubenswrapper[4890]: I0121 15:54:37.215043 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:37 crc kubenswrapper[4890]: I0121 15:54:37.270883 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 15:54:38 crc kubenswrapper[4890]: I0121 15:54:38.137424 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bmzvq" event={"ID":"581dea93-d2d2-45fd-9b38-c0829c031b5c","Type":"ContainerStarted","Data":"b97e050ad2a00d179937618daed6f81f2c163cafb139936cbf8b07d6cb0ad28f"} Jan 21 15:54:38 crc kubenswrapper[4890]: I0121 15:54:38.165606 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bmzvq" podStartSLOduration=2.886390694 podStartE2EDuration="12.165588521s" podCreationTimestamp="2026-01-21 15:54:26 +0000 UTC" firstStartedPulling="2026-01-21 15:54:27.669682707 +0000 UTC m=+1350.031125126" lastFinishedPulling="2026-01-21 15:54:36.948880534 +0000 UTC m=+1359.310322953" observedRunningTime="2026-01-21 15:54:38.165244083 +0000 UTC 
m=+1360.526686492" watchObservedRunningTime="2026-01-21 15:54:38.165588521 +0000 UTC m=+1360.527030940" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.006767 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.135196 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-run-httpd\") pod \"7882bed4-6915-422f-a653-c6f841363752\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.135616 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-combined-ca-bundle\") pod \"7882bed4-6915-422f-a653-c6f841363752\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.135664 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-scripts\") pod \"7882bed4-6915-422f-a653-c6f841363752\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.135713 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-sg-core-conf-yaml\") pod \"7882bed4-6915-422f-a653-c6f841363752\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.135732 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-config-data\") pod \"7882bed4-6915-422f-a653-c6f841363752\" (UID: 
\"7882bed4-6915-422f-a653-c6f841363752\") " Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.135807 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-log-httpd\") pod \"7882bed4-6915-422f-a653-c6f841363752\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.135852 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7zww\" (UniqueName: \"kubernetes.io/projected/7882bed4-6915-422f-a653-c6f841363752-kube-api-access-p7zww\") pod \"7882bed4-6915-422f-a653-c6f841363752\" (UID: \"7882bed4-6915-422f-a653-c6f841363752\") " Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.136136 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7882bed4-6915-422f-a653-c6f841363752" (UID: "7882bed4-6915-422f-a653-c6f841363752"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.136434 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7882bed4-6915-422f-a653-c6f841363752" (UID: "7882bed4-6915-422f-a653-c6f841363752"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.136535 4890 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.142668 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7882bed4-6915-422f-a653-c6f841363752-kube-api-access-p7zww" (OuterVolumeSpecName: "kube-api-access-p7zww") pod "7882bed4-6915-422f-a653-c6f841363752" (UID: "7882bed4-6915-422f-a653-c6f841363752"). InnerVolumeSpecName "kube-api-access-p7zww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.144703 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-scripts" (OuterVolumeSpecName: "scripts") pod "7882bed4-6915-422f-a653-c6f841363752" (UID: "7882bed4-6915-422f-a653-c6f841363752"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.163520 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7882bed4-6915-422f-a653-c6f841363752" (UID: "7882bed4-6915-422f-a653-c6f841363752"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.205923 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7882bed4-6915-422f-a653-c6f841363752" (UID: "7882bed4-6915-422f-a653-c6f841363752"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.211007 4890 generic.go:334] "Generic (PLEG): container finished" podID="7882bed4-6915-422f-a653-c6f841363752" containerID="3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176" exitCode=137 Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.211058 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerDied","Data":"3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176"} Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.211085 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.211101 4890 scope.go:117] "RemoveContainer" containerID="3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.211089 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7882bed4-6915-422f-a653-c6f841363752","Type":"ContainerDied","Data":"b4d0088744200b88ecfbcff80521ed23ab17628f5739587b6a6fcb638f511647"} Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.231769 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-config-data" (OuterVolumeSpecName: "config-data") pod "7882bed4-6915-422f-a653-c6f841363752" (UID: "7882bed4-6915-422f-a653-c6f841363752"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.238765 4890 scope.go:117] "RemoveContainer" containerID="326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.238769 4890 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7882bed4-6915-422f-a653-c6f841363752-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.238939 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7zww\" (UniqueName: \"kubernetes.io/projected/7882bed4-6915-422f-a653-c6f841363752-kube-api-access-p7zww\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.238955 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.238969 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.238983 4890 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.238993 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7882bed4-6915-422f-a653-c6f841363752-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.255535 4890 scope.go:117] "RemoveContainer" 
containerID="ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.276752 4890 scope.go:117] "RemoveContainer" containerID="1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.295936 4890 scope.go:117] "RemoveContainer" containerID="3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176" Jan 21 15:54:44 crc kubenswrapper[4890]: E0121 15:54:44.296309 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176\": container with ID starting with 3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176 not found: ID does not exist" containerID="3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.296344 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176"} err="failed to get container status \"3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176\": rpc error: code = NotFound desc = could not find container \"3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176\": container with ID starting with 3bfd5430db34bf21fe4890f8d407866d0db2ab4778ffe815b4356479224df176 not found: ID does not exist" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.296393 4890 scope.go:117] "RemoveContainer" containerID="326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e" Jan 21 15:54:44 crc kubenswrapper[4890]: E0121 15:54:44.296818 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e\": container with ID starting with 
326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e not found: ID does not exist" containerID="326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.296845 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e"} err="failed to get container status \"326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e\": rpc error: code = NotFound desc = could not find container \"326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e\": container with ID starting with 326777f1c04ff6674fdb2829f81ea9be6de75b0cdcd62219a53b492655d5996e not found: ID does not exist" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.296862 4890 scope.go:117] "RemoveContainer" containerID="ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e" Jan 21 15:54:44 crc kubenswrapper[4890]: E0121 15:54:44.297132 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e\": container with ID starting with ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e not found: ID does not exist" containerID="ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.297155 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e"} err="failed to get container status \"ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e\": rpc error: code = NotFound desc = could not find container \"ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e\": container with ID starting with ea09e1a7089731ac71abe9c815165eed8f85f8fbad0dac6f693945f4113fde7e not found: ID does not 
exist" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.297171 4890 scope.go:117] "RemoveContainer" containerID="1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02" Jan 21 15:54:44 crc kubenswrapper[4890]: E0121 15:54:44.297553 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02\": container with ID starting with 1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02 not found: ID does not exist" containerID="1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.297578 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02"} err="failed to get container status \"1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02\": rpc error: code = NotFound desc = could not find container \"1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02\": container with ID starting with 1453295d2ffbd3832f8cc7ade04155f0dff2de44af9b215b2af67e0eb41deb02 not found: ID does not exist" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.562322 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.576666 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.587620 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:44 crc kubenswrapper[4890]: E0121 15:54:44.589336 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="ceilometer-notification-agent" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.589376 4890 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="ceilometer-notification-agent" Jan 21 15:54:44 crc kubenswrapper[4890]: E0121 15:54:44.589388 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="proxy-httpd" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.589394 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="proxy-httpd" Jan 21 15:54:44 crc kubenswrapper[4890]: E0121 15:54:44.589413 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="ceilometer-central-agent" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.589420 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="ceilometer-central-agent" Jan 21 15:54:44 crc kubenswrapper[4890]: E0121 15:54:44.589431 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="sg-core" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.589437 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="sg-core" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.589607 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="ceilometer-central-agent" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.589618 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="sg-core" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.589633 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="proxy-httpd" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.589648 4890 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7882bed4-6915-422f-a653-c6f841363752" containerName="ceilometer-notification-agent" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.591131 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.593953 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.594154 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.599728 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.647668 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-config-data\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.647712 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-scripts\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.647831 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r9l4\" (UniqueName: \"kubernetes.io/projected/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-kube-api-access-9r9l4\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.647888 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.647957 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.648013 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-log-httpd\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.648063 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-run-httpd\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.749441 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-log-httpd\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.749504 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-run-httpd\") pod 
\"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.749540 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-config-data\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.749557 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-scripts\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.749602 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r9l4\" (UniqueName: \"kubernetes.io/projected/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-kube-api-access-9r9l4\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.749636 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.749684 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.750009 4890 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-log-httpd\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.750110 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-run-httpd\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.754724 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.754807 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.754902 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-scripts\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.755153 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-config-data\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 
15:54:44.784556 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r9l4\" (UniqueName: \"kubernetes.io/projected/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-kube-api-access-9r9l4\") pod \"ceilometer-0\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " pod="openstack/ceilometer-0" Jan 21 15:54:44 crc kubenswrapper[4890]: I0121 15:54:44.915912 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:54:45 crc kubenswrapper[4890]: I0121 15:54:45.380667 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:54:45 crc kubenswrapper[4890]: W0121 15:54:45.385080 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33dedbb4_4f3f_41bc_bbc8_45dc50c1996f.slice/crio-96749d399bb56d8ce0412a82f542361cdb79dda755aff5d9429a2e619d42c4b3 WatchSource:0}: Error finding container 96749d399bb56d8ce0412a82f542361cdb79dda755aff5d9429a2e619d42c4b3: Status 404 returned error can't find the container with id 96749d399bb56d8ce0412a82f542361cdb79dda755aff5d9429a2e619d42c4b3 Jan 21 15:54:45 crc kubenswrapper[4890]: I0121 15:54:45.925984 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7882bed4-6915-422f-a653-c6f841363752" path="/var/lib/kubelet/pods/7882bed4-6915-422f-a653-c6f841363752/volumes" Jan 21 15:54:46 crc kubenswrapper[4890]: I0121 15:54:46.230562 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerStarted","Data":"96749d399bb56d8ce0412a82f542361cdb79dda755aff5d9429a2e619d42c4b3"} Jan 21 15:54:48 crc kubenswrapper[4890]: I0121 15:54:48.252297 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerStarted","Data":"7017c19d128965e3ea5519acb897d64a4a0186fd9ac0a8111145505811b71486"} Jan 21 15:54:48 crc kubenswrapper[4890]: I0121 15:54:48.252915 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerStarted","Data":"2473a0b336b6af59b0d120e3f5c762ac26eeffce33827dcc1281e5e5a77e513c"} Jan 21 15:54:49 crc kubenswrapper[4890]: I0121 15:54:49.265436 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerStarted","Data":"2968880d67f8d80e261c0cd1700a0c0f861afc00fa584cc1cbe344bcbb428a78"} Jan 21 15:54:51 crc kubenswrapper[4890]: I0121 15:54:51.301694 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerStarted","Data":"37b77d106c7ced19bd7bf74229155a9d719c70a01e1d82b242fa7a0b06b96797"} Jan 21 15:54:51 crc kubenswrapper[4890]: I0121 15:54:51.302153 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:54:51 crc kubenswrapper[4890]: I0121 15:54:51.336116 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5074741510000003 podStartE2EDuration="7.336098177s" podCreationTimestamp="2026-01-21 15:54:44 +0000 UTC" firstStartedPulling="2026-01-21 15:54:45.387969833 +0000 UTC m=+1367.749412242" lastFinishedPulling="2026-01-21 15:54:50.216593859 +0000 UTC m=+1372.578036268" observedRunningTime="2026-01-21 15:54:51.329128513 +0000 UTC m=+1373.690570952" watchObservedRunningTime="2026-01-21 15:54:51.336098177 +0000 UTC m=+1373.697540586" Jan 21 15:54:55 crc kubenswrapper[4890]: I0121 15:54:55.362125 4890 generic.go:334] "Generic (PLEG): container finished" podID="581dea93-d2d2-45fd-9b38-c0829c031b5c" 
containerID="b97e050ad2a00d179937618daed6f81f2c163cafb139936cbf8b07d6cb0ad28f" exitCode=0 Jan 21 15:54:55 crc kubenswrapper[4890]: I0121 15:54:55.362211 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bmzvq" event={"ID":"581dea93-d2d2-45fd-9b38-c0829c031b5c","Type":"ContainerDied","Data":"b97e050ad2a00d179937618daed6f81f2c163cafb139936cbf8b07d6cb0ad28f"} Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.731255 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.776597 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-scripts\") pod \"581dea93-d2d2-45fd-9b38-c0829c031b5c\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.777563 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-combined-ca-bundle\") pod \"581dea93-d2d2-45fd-9b38-c0829c031b5c\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.777695 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-config-data\") pod \"581dea93-d2d2-45fd-9b38-c0829c031b5c\" (UID: \"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.778242 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js8sd\" (UniqueName: \"kubernetes.io/projected/581dea93-d2d2-45fd-9b38-c0829c031b5c-kube-api-access-js8sd\") pod \"581dea93-d2d2-45fd-9b38-c0829c031b5c\" (UID: 
\"581dea93-d2d2-45fd-9b38-c0829c031b5c\") " Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.785496 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-scripts" (OuterVolumeSpecName: "scripts") pod "581dea93-d2d2-45fd-9b38-c0829c031b5c" (UID: "581dea93-d2d2-45fd-9b38-c0829c031b5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.796123 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581dea93-d2d2-45fd-9b38-c0829c031b5c-kube-api-access-js8sd" (OuterVolumeSpecName: "kube-api-access-js8sd") pod "581dea93-d2d2-45fd-9b38-c0829c031b5c" (UID: "581dea93-d2d2-45fd-9b38-c0829c031b5c"). InnerVolumeSpecName "kube-api-access-js8sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.814552 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "581dea93-d2d2-45fd-9b38-c0829c031b5c" (UID: "581dea93-d2d2-45fd-9b38-c0829c031b5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.814621 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-config-data" (OuterVolumeSpecName: "config-data") pod "581dea93-d2d2-45fd-9b38-c0829c031b5c" (UID: "581dea93-d2d2-45fd-9b38-c0829c031b5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.881175 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.881205 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.881219 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581dea93-d2d2-45fd-9b38-c0829c031b5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:56 crc kubenswrapper[4890]: I0121 15:54:56.881228 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js8sd\" (UniqueName: \"kubernetes.io/projected/581dea93-d2d2-45fd-9b38-c0829c031b5c-kube-api-access-js8sd\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.383746 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bmzvq" event={"ID":"581dea93-d2d2-45fd-9b38-c0829c031b5c","Type":"ContainerDied","Data":"b44653e33cd05efedaff5e18818184c73bdafcdb5aefcfabff6e52a366a1dc02"} Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.383791 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b44653e33cd05efedaff5e18818184c73bdafcdb5aefcfabff6e52a366a1dc02" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.383806 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bmzvq" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.501009 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 15:54:57 crc kubenswrapper[4890]: E0121 15:54:57.501511 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581dea93-d2d2-45fd-9b38-c0829c031b5c" containerName="nova-cell0-conductor-db-sync" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.501531 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="581dea93-d2d2-45fd-9b38-c0829c031b5c" containerName="nova-cell0-conductor-db-sync" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.501781 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="581dea93-d2d2-45fd-9b38-c0829c031b5c" containerName="nova-cell0-conductor-db-sync" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.502654 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.505284 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cn85b" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.505339 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.511535 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.595175 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h982\" (UniqueName: \"kubernetes.io/projected/50c99515-8e62-4e54-9ffc-e9294db2dc4f-kube-api-access-8h982\") pod \"nova-cell0-conductor-0\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc 
kubenswrapper[4890]: I0121 15:54:57.595305 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.595536 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.697029 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.697152 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h982\" (UniqueName: \"kubernetes.io/projected/50c99515-8e62-4e54-9ffc-e9294db2dc4f-kube-api-access-8h982\") pod \"nova-cell0-conductor-0\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.697223 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.702624 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.708879 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.715179 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h982\" (UniqueName: \"kubernetes.io/projected/50c99515-8e62-4e54-9ffc-e9294db2dc4f-kube-api-access-8h982\") pod \"nova-cell0-conductor-0\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.913705 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cn85b" Jan 21 15:54:57 crc kubenswrapper[4890]: I0121 15:54:57.919049 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:58 crc kubenswrapper[4890]: I0121 15:54:58.413140 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 15:54:59 crc kubenswrapper[4890]: I0121 15:54:59.417933 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"50c99515-8e62-4e54-9ffc-e9294db2dc4f","Type":"ContainerStarted","Data":"90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4"} Jan 21 15:54:59 crc kubenswrapper[4890]: I0121 15:54:59.418544 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 15:54:59 crc kubenswrapper[4890]: I0121 15:54:59.418566 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"50c99515-8e62-4e54-9ffc-e9294db2dc4f","Type":"ContainerStarted","Data":"000aae2b2f59e6d8873edd49f49e70ab03a73a68859e8f9d5a80519c1e410d65"} Jan 21 15:54:59 crc kubenswrapper[4890]: I0121 15:54:59.441988 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.441963275 podStartE2EDuration="2.441963275s" podCreationTimestamp="2026-01-21 15:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:54:59.433687829 +0000 UTC m=+1381.795130248" watchObservedRunningTime="2026-01-21 15:54:59.441963275 +0000 UTC m=+1381.803405684" Jan 21 15:55:07 crc kubenswrapper[4890]: I0121 15:55:07.950819 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.400120 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-frrhq"] Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.401818 4890 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.418458 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.419112 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.426654 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-frrhq"] Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.504584 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.504647 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lndhx\" (UniqueName: \"kubernetes.io/projected/34c20b0a-f576-475e-846d-75442d91073d-kube-api-access-lndhx\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.504678 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-scripts\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.505201 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-config-data\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.561138 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.563524 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.569887 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.583666 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.614270 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-config-data\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.614392 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.614418 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lndhx\" (UniqueName: \"kubernetes.io/projected/34c20b0a-f576-475e-846d-75442d91073d-kube-api-access-lndhx\") pod 
\"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.614438 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-scripts\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.614521 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.614540 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcg4q\" (UniqueName: \"kubernetes.io/projected/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-kube-api-access-dcg4q\") pod \"nova-cell1-novncproxy-0\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.614595 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.621735 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-config-data\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: 
\"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.623162 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.632393 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-scripts\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.662874 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.664694 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.669874 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.683914 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lndhx\" (UniqueName: \"kubernetes.io/projected/34c20b0a-f576-475e-846d-75442d91073d-kube-api-access-lndhx\") pod \"nova-cell0-cell-mapping-frrhq\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.697025 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.716497 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.716536 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcg4q\" (UniqueName: \"kubernetes.io/projected/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-kube-api-access-dcg4q\") pod \"nova-cell1-novncproxy-0\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.716558 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p48k5\" (UniqueName: \"kubernetes.io/projected/bf0a105e-531b-4660-8345-60756d713a37-kube-api-access-p48k5\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0" Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.716586 
4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0a105e-531b-4660-8345-60756d713a37-logs\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.716626 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.716650 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.716720 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-config-data\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.747888 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.749160 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-frrhq"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.757562 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.760759 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.761330 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.773048 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.775869 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcg4q\" (UniqueName: \"kubernetes.io/projected/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-kube-api-access-dcg4q\") pod \"nova-cell1-novncproxy-0\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.818433 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-config-data\") pod \"nova-scheduler-0\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.818473 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.818511 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdgmh\" (UniqueName: \"kubernetes.io/projected/fb02425b-8695-4932-9d41-ecd4c7e13a0e-kube-api-access-vdgmh\") pod \"nova-scheduler-0\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.818585 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-config-data\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.818614 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.818652 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p48k5\" (UniqueName: \"kubernetes.io/projected/bf0a105e-531b-4660-8345-60756d713a37-kube-api-access-p48k5\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.818677 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0a105e-531b-4660-8345-60756d713a37-logs\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.819429 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0a105e-531b-4660-8345-60756d713a37-logs\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.842081 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.842141 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.843266 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-config-data\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.861076 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p48k5\" (UniqueName: \"kubernetes.io/projected/bf0a105e-531b-4660-8345-60756d713a37-kube-api-access-p48k5\") pod \"nova-metadata-0\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.893158 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.928936 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdgmh\" (UniqueName: \"kubernetes.io/projected/fb02425b-8695-4932-9d41-ecd4c7e13a0e-kube-api-access-vdgmh\") pod \"nova-scheduler-0\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.929620 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.930246 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-config-data\") pod \"nova-scheduler-0\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.935637 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-config-data\") pod \"nova-scheduler-0\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.938075 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.948179 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.961131 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdgmh\" (UniqueName: \"kubernetes.io/projected/fb02425b-8695-4932-9d41-ecd4c7e13a0e-kube-api-access-vdgmh\") pod \"nova-scheduler-0\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " pod="openstack/nova-scheduler-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.970395 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.992309 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.992426 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 15:55:08 crc kubenswrapper[4890]: I0121 15:55:08.994502 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.008397 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-cvh5s"]
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.012276 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.024066 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-cvh5s"]
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.032476 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.032519 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-config-data\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.032706 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slvf2\" (UniqueName: \"kubernetes.io/projected/db9e685f-c1ae-4780-b678-82ca547207b1-kube-api-access-slvf2\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.032920 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9e685f-c1ae-4780-b678-82ca547207b1-logs\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.134864 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9e685f-c1ae-4780-b678-82ca547207b1-logs\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.134934 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.134975 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.135039 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbjbr\" (UniqueName: \"kubernetes.io/projected/55162589-99e0-4b08-931e-79b4cb40b318-kube-api-access-dbjbr\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.135129 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-config\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.135157 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.135184 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-config-data\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.135238 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.135264 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slvf2\" (UniqueName: \"kubernetes.io/projected/db9e685f-c1ae-4780-b678-82ca547207b1-kube-api-access-slvf2\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.135309 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.136223 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9e685f-c1ae-4780-b678-82ca547207b1-logs\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.142773 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.155043 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-config-data\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.161167 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slvf2\" (UniqueName: \"kubernetes.io/projected/db9e685f-c1ae-4780-b678-82ca547207b1-kube-api-access-slvf2\") pod \"nova-api-0\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.240440 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-config\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.240516 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.240557 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.240608 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.240634 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.240670 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbjbr\" (UniqueName: \"kubernetes.io/projected/55162589-99e0-4b08-931e-79b4cb40b318-kube-api-access-dbjbr\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.242162 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-config\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.242878 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.243556 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.244100 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.244674 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.249585 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.258021 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbjbr\" (UniqueName: \"kubernetes.io/projected/55162589-99e0-4b08-931e-79b4cb40b318-kube-api-access-dbjbr\") pod \"dnsmasq-dns-557bbc7df7-cvh5s\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") " pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.356802 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.357786 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.362808 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-frrhq"]
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.516596 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-frrhq" event={"ID":"34c20b0a-f576-475e-846d-75442d91073d","Type":"ContainerStarted","Data":"076ea606d0272ea00602daaa2b658ad52fa5ab798794bb867cccf02aad6427aa"}
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.551322 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 15:55:09 crc kubenswrapper[4890]: W0121 15:55:09.556814 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0a105e_531b_4660_8345_60756d713a37.slice/crio-95d07f5aef718d00434be61012470d953db17aff9fe79bfc2610166fffecd905 WatchSource:0}: Error finding container 95d07f5aef718d00434be61012470d953db17aff9fe79bfc2610166fffecd905: Status 404 returned error can't find the container with id 95d07f5aef718d00434be61012470d953db17aff9fe79bfc2610166fffecd905
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.620133 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.679366 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kcplg"]
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.680582 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.683289 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.683487 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.697699 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kcplg"]
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.752131 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-scripts\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.752229 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-config-data\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.752260 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4v87\" (UniqueName: \"kubernetes.io/projected/bb2e2a4d-9099-4d00-9f68-cd52b6566215-kube-api-access-p4v87\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.752290 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.831415 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.853876 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-config-data\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.853927 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4v87\" (UniqueName: \"kubernetes.io/projected/bb2e2a4d-9099-4d00-9f68-cd52b6566215-kube-api-access-p4v87\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.853962 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.854063 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-scripts\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.862482 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-scripts\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.862796 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-config-data\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.863250 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:09 crc kubenswrapper[4890]: I0121 15:55:09.880791 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4v87\" (UniqueName: \"kubernetes.io/projected/bb2e2a4d-9099-4d00-9f68-cd52b6566215-kube-api-access-p4v87\") pod \"nova-cell1-conductor-db-sync-kcplg\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.007758 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-cvh5s"]
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.015900 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 15:55:10 crc kubenswrapper[4890]: W0121 15:55:10.018904 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55162589_99e0_4b08_931e_79b4cb40b318.slice/crio-3b3436a483ede244614b434b7e84ded2ba893fe7213c7c52f09d82ca5f621672 WatchSource:0}: Error finding container 3b3436a483ede244614b434b7e84ded2ba893fe7213c7c52f09d82ca5f621672: Status 404 returned error can't find the container with id 3b3436a483ede244614b434b7e84ded2ba893fe7213c7c52f09d82ca5f621672
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.065288 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kcplg"
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.527376 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2a6623bf-ad19-4e29-84aa-d16fc10b29a3","Type":"ContainerStarted","Data":"959d7db2d32a74f4a777d61ca0e7fce7a1949d5005241dc50f2b6ac7666745f2"}
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.529263 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db9e685f-c1ae-4780-b678-82ca547207b1","Type":"ContainerStarted","Data":"fc060f265d426b37afe36af17ef5ba5bfd71fd812f6b8568b4115f9620af1653"}
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.530937 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb02425b-8695-4932-9d41-ecd4c7e13a0e","Type":"ContainerStarted","Data":"d8b3a7b245c259865f51df2ad6163817882cfde4b6dcc925492176a669eae627"}
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.532266 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0a105e-531b-4660-8345-60756d713a37","Type":"ContainerStarted","Data":"95d07f5aef718d00434be61012470d953db17aff9fe79bfc2610166fffecd905"}
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.533961 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-frrhq" event={"ID":"34c20b0a-f576-475e-846d-75442d91073d","Type":"ContainerStarted","Data":"10c72ed0b55323e6e81eb28fdd4bd49dbab1b1a9fb1044a4284d618c6d81f405"}
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.535587 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s" event={"ID":"55162589-99e0-4b08-931e-79b4cb40b318","Type":"ContainerStarted","Data":"720dbad3a9e03a9a90a2cf1f8c0cb55417dccb2355a038f87ec98af0aa642518"}
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.535707 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s" event={"ID":"55162589-99e0-4b08-931e-79b4cb40b318","Type":"ContainerStarted","Data":"3b3436a483ede244614b434b7e84ded2ba893fe7213c7c52f09d82ca5f621672"}
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.572655 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-frrhq" podStartSLOduration=2.572633282 podStartE2EDuration="2.572633282s" podCreationTimestamp="2026-01-21 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:10.549596898 +0000 UTC m=+1392.911039327" watchObservedRunningTime="2026-01-21 15:55:10.572633282 +0000 UTC m=+1392.934075691"
Jan 21 15:55:10 crc kubenswrapper[4890]: I0121 15:55:10.772487 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kcplg"]
Jan 21 15:55:11 crc kubenswrapper[4890]: I0121 15:55:11.556752 4890 generic.go:334] "Generic (PLEG): container finished" podID="55162589-99e0-4b08-931e-79b4cb40b318" containerID="720dbad3a9e03a9a90a2cf1f8c0cb55417dccb2355a038f87ec98af0aa642518" exitCode=0
Jan 21 15:55:11 crc kubenswrapper[4890]: I0121 15:55:11.557169 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s" event={"ID":"55162589-99e0-4b08-931e-79b4cb40b318","Type":"ContainerDied","Data":"720dbad3a9e03a9a90a2cf1f8c0cb55417dccb2355a038f87ec98af0aa642518"}
Jan 21 15:55:11 crc kubenswrapper[4890]: I0121 15:55:11.575431 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kcplg" event={"ID":"bb2e2a4d-9099-4d00-9f68-cd52b6566215","Type":"ContainerStarted","Data":"ac959b1ac528e4de2e66685bb7abda5d333740849f8b9ca6b1161716bbc68588"}
Jan 21 15:55:11 crc kubenswrapper[4890]: I0121 15:55:11.575511 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kcplg" event={"ID":"bb2e2a4d-9099-4d00-9f68-cd52b6566215","Type":"ContainerStarted","Data":"511daf357604aca908930ccd983ada10696fea4d18532dd9b7110e48d7dd8c20"}
Jan 21 15:55:11 crc kubenswrapper[4890]: I0121 15:55:11.616599 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-kcplg" podStartSLOduration=2.6165800790000002 podStartE2EDuration="2.616580079s" podCreationTimestamp="2026-01-21 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:11.604899578 +0000 UTC m=+1393.966341987" watchObservedRunningTime="2026-01-21 15:55:11.616580079 +0000 UTC m=+1393.978022488"
Jan 21 15:55:12 crc kubenswrapper[4890]: I0121 15:55:12.608112 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 15:55:12 crc kubenswrapper[4890]: I0121 15:55:12.627522 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 21 15:55:14 crc kubenswrapper[4890]: I0121 15:55:14.922277 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.634608 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0a105e-531b-4660-8345-60756d713a37","Type":"ContainerStarted","Data":"cf2b3564d2fb6cbf8c919fada2266d79c98d5eb16c4de22ffcd35a4912f8209d"}
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.635116 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0a105e-531b-4660-8345-60756d713a37","Type":"ContainerStarted","Data":"f39563c312c0b4204e2242592476dc699262c4e9e8cf2c74eb79911336a3ad53"}
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.634738 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf0a105e-531b-4660-8345-60756d713a37" containerName="nova-metadata-metadata" containerID="cri-o://cf2b3564d2fb6cbf8c919fada2266d79c98d5eb16c4de22ffcd35a4912f8209d" gracePeriod=30
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.634673 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf0a105e-531b-4660-8345-60756d713a37" containerName="nova-metadata-log" containerID="cri-o://f39563c312c0b4204e2242592476dc699262c4e9e8cf2c74eb79911336a3ad53" gracePeriod=30
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.640282 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s" event={"ID":"55162589-99e0-4b08-931e-79b4cb40b318","Type":"ContainerStarted","Data":"dae81e651eb0a134bbb1825593c87bc347567b78033ee2ad43a2b417009a5490"}
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.640451 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.647000 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2a6623bf-ad19-4e29-84aa-d16fc10b29a3","Type":"ContainerStarted","Data":"f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80"}
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.647124 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2a6623bf-ad19-4e29-84aa-d16fc10b29a3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80" gracePeriod=30
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.649218 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db9e685f-c1ae-4780-b678-82ca547207b1","Type":"ContainerStarted","Data":"ed1727f41b6a25356f26403270982000ad0310e419878a73d965b4a85da6db26"}
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.649289 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db9e685f-c1ae-4780-b678-82ca547207b1","Type":"ContainerStarted","Data":"637de34232c5eaa309845cd6c25b6d75489ab3d96c0fb5511356c65eb3487a78"}
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.651263 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb02425b-8695-4932-9d41-ecd4c7e13a0e","Type":"ContainerStarted","Data":"6050c6b53fd24e4dae30edf731d977760fa68c44449652684c6ca1a444b5e755"}
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.663722 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.66385795 podStartE2EDuration="7.663699458s" podCreationTimestamp="2026-01-21 15:55:08 +0000 UTC" firstStartedPulling="2026-01-21 15:55:09.559251984 +0000 UTC m=+1391.920694393" lastFinishedPulling="2026-01-21 15:55:14.559093472 +0000 UTC m=+1396.920535901" observedRunningTime="2026-01-21 15:55:15.656793546 +0000 UTC m=+1398.018235955" watchObservedRunningTime="2026-01-21 15:55:15.663699458 +0000 UTC m=+1398.025141877"
Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.682053 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=2.946697928 podStartE2EDuration="7.682033404s" podCreationTimestamp="2026-01-21 15:55:08 +0000 UTC" firstStartedPulling="2026-01-21 15:55:09.843691152 +0000 UTC m=+1392.205133571" lastFinishedPulling="2026-01-21 15:55:14.579026638 +0000 UTC m=+1396.940469047" observedRunningTime="2026-01-21 15:55:15.674969889 +0000 UTC m=+1398.036412298" watchObservedRunningTime="2026-01-21 15:55:15.682033404 +0000 UTC m=+1398.043475813" Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.700471 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.1339929 podStartE2EDuration="7.70034151s" podCreationTimestamp="2026-01-21 15:55:08 +0000 UTC" firstStartedPulling="2026-01-21 15:55:10.015155149 +0000 UTC m=+1392.376597558" lastFinishedPulling="2026-01-21 15:55:14.581503759 +0000 UTC m=+1396.942946168" observedRunningTime="2026-01-21 15:55:15.693857959 +0000 UTC m=+1398.055300368" watchObservedRunningTime="2026-01-21 15:55:15.70034151 +0000 UTC m=+1398.061783909" Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.727647 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.792932733 podStartE2EDuration="7.727621839s" podCreationTimestamp="2026-01-21 15:55:08 +0000 UTC" firstStartedPulling="2026-01-21 15:55:09.645170262 +0000 UTC m=+1392.006612661" lastFinishedPulling="2026-01-21 15:55:14.579859338 +0000 UTC m=+1396.941301767" observedRunningTime="2026-01-21 15:55:15.720515112 +0000 UTC m=+1398.081957521" watchObservedRunningTime="2026-01-21 15:55:15.727621839 +0000 UTC m=+1398.089064248" Jan 21 15:55:15 crc kubenswrapper[4890]: I0121 15:55:15.752586 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s" podStartSLOduration=7.752554619 podStartE2EDuration="7.752554619s" podCreationTimestamp="2026-01-21 15:55:08 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:15.748647102 +0000 UTC m=+1398.110089531" watchObservedRunningTime="2026-01-21 15:55:15.752554619 +0000 UTC m=+1398.113997028" Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.669114 4890 generic.go:334] "Generic (PLEG): container finished" podID="bf0a105e-531b-4660-8345-60756d713a37" containerID="cf2b3564d2fb6cbf8c919fada2266d79c98d5eb16c4de22ffcd35a4912f8209d" exitCode=0 Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.669293 4890 generic.go:334] "Generic (PLEG): container finished" podID="bf0a105e-531b-4660-8345-60756d713a37" containerID="f39563c312c0b4204e2242592476dc699262c4e9e8cf2c74eb79911336a3ad53" exitCode=143 Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.669207 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0a105e-531b-4660-8345-60756d713a37","Type":"ContainerDied","Data":"cf2b3564d2fb6cbf8c919fada2266d79c98d5eb16c4de22ffcd35a4912f8209d"} Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.670448 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0a105e-531b-4660-8345-60756d713a37","Type":"ContainerDied","Data":"f39563c312c0b4204e2242592476dc699262c4e9e8cf2c74eb79911336a3ad53"} Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.896060 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.949478 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p48k5\" (UniqueName: \"kubernetes.io/projected/bf0a105e-531b-4660-8345-60756d713a37-kube-api-access-p48k5\") pod \"bf0a105e-531b-4660-8345-60756d713a37\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.949554 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0a105e-531b-4660-8345-60756d713a37-logs\") pod \"bf0a105e-531b-4660-8345-60756d713a37\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.949605 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-config-data\") pod \"bf0a105e-531b-4660-8345-60756d713a37\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.949684 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-combined-ca-bundle\") pod \"bf0a105e-531b-4660-8345-60756d713a37\" (UID: \"bf0a105e-531b-4660-8345-60756d713a37\") " Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.951981 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf0a105e-531b-4660-8345-60756d713a37-logs" (OuterVolumeSpecName: "logs") pod "bf0a105e-531b-4660-8345-60756d713a37" (UID: "bf0a105e-531b-4660-8345-60756d713a37"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.956524 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0a105e-531b-4660-8345-60756d713a37-kube-api-access-p48k5" (OuterVolumeSpecName: "kube-api-access-p48k5") pod "bf0a105e-531b-4660-8345-60756d713a37" (UID: "bf0a105e-531b-4660-8345-60756d713a37"). InnerVolumeSpecName "kube-api-access-p48k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:16 crc kubenswrapper[4890]: I0121 15:55:16.986491 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf0a105e-531b-4660-8345-60756d713a37" (UID: "bf0a105e-531b-4660-8345-60756d713a37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.006665 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-config-data" (OuterVolumeSpecName: "config-data") pod "bf0a105e-531b-4660-8345-60756d713a37" (UID: "bf0a105e-531b-4660-8345-60756d713a37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.052613 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.052650 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p48k5\" (UniqueName: \"kubernetes.io/projected/bf0a105e-531b-4660-8345-60756d713a37-kube-api-access-p48k5\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.052661 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0a105e-531b-4660-8345-60756d713a37-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.052670 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0a105e-531b-4660-8345-60756d713a37-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.681108 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0a105e-531b-4660-8345-60756d713a37","Type":"ContainerDied","Data":"95d07f5aef718d00434be61012470d953db17aff9fe79bfc2610166fffecd905"} Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.681172 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.681327 4890 scope.go:117] "RemoveContainer" containerID="cf2b3564d2fb6cbf8c919fada2266d79c98d5eb16c4de22ffcd35a4912f8209d" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.714479 4890 scope.go:117] "RemoveContainer" containerID="f39563c312c0b4204e2242592476dc699262c4e9e8cf2c74eb79911336a3ad53" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.762577 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.795995 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.807718 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:17 crc kubenswrapper[4890]: E0121 15:55:17.808155 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0a105e-531b-4660-8345-60756d713a37" containerName="nova-metadata-log" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.808172 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0a105e-531b-4660-8345-60756d713a37" containerName="nova-metadata-log" Jan 21 15:55:17 crc kubenswrapper[4890]: E0121 15:55:17.808185 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0a105e-531b-4660-8345-60756d713a37" containerName="nova-metadata-metadata" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.808192 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0a105e-531b-4660-8345-60756d713a37" containerName="nova-metadata-metadata" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.808414 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0a105e-531b-4660-8345-60756d713a37" containerName="nova-metadata-log" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.808434 4890 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="bf0a105e-531b-4660-8345-60756d713a37" containerName="nova-metadata-metadata" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.809396 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.811514 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.812167 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.832639 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.865761 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk66t\" (UniqueName: \"kubernetes.io/projected/196adbf1-176b-4741-b60d-97ec6f3473d5-kube-api-access-fk66t\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.865842 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.865994 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/196adbf1-176b-4741-b60d-97ec6f3473d5-logs\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.866079 4890 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.866527 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-config-data\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.934672 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0a105e-531b-4660-8345-60756d713a37" path="/var/lib/kubelet/pods/bf0a105e-531b-4660-8345-60756d713a37/volumes" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.968640 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.968696 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/196adbf1-176b-4741-b60d-97ec6f3473d5-logs\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.968724 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " 
pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.968806 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-config-data\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.968878 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk66t\" (UniqueName: \"kubernetes.io/projected/196adbf1-176b-4741-b60d-97ec6f3473d5-kube-api-access-fk66t\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.970089 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/196adbf1-176b-4741-b60d-97ec6f3473d5-logs\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.974551 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.976112 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-config-data\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.981469 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:17 crc kubenswrapper[4890]: I0121 15:55:17.991142 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk66t\" (UniqueName: \"kubernetes.io/projected/196adbf1-176b-4741-b60d-97ec6f3473d5-kube-api-access-fk66t\") pod \"nova-metadata-0\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " pod="openstack/nova-metadata-0" Jan 21 15:55:18 crc kubenswrapper[4890]: I0121 15:55:18.129165 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:55:18 crc kubenswrapper[4890]: I0121 15:55:18.628959 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:18 crc kubenswrapper[4890]: W0121 15:55:18.633274 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod196adbf1_176b_4741_b60d_97ec6f3473d5.slice/crio-cb013b2a3454c6222b446efdce07d202da87d5a8f768107bbe52b70308d0badc WatchSource:0}: Error finding container cb013b2a3454c6222b446efdce07d202da87d5a8f768107bbe52b70308d0badc: Status 404 returned error can't find the container with id cb013b2a3454c6222b446efdce07d202da87d5a8f768107bbe52b70308d0badc Jan 21 15:55:18 crc kubenswrapper[4890]: I0121 15:55:18.691330 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"196adbf1-176b-4741-b60d-97ec6f3473d5","Type":"ContainerStarted","Data":"cb013b2a3454c6222b446efdce07d202da87d5a8f768107bbe52b70308d0badc"} Jan 21 15:55:18 crc kubenswrapper[4890]: I0121 15:55:18.762422 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:55:18 crc kubenswrapper[4890]: I0121 15:55:18.762477 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:55:18 crc kubenswrapper[4890]: I0121 15:55:18.894110 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.250571 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.250864 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.285294 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.357743 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.357980 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.362758 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s" Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.455385 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-z8r2k"] Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.455733 4890 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" podUID="78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" containerName="dnsmasq-dns" containerID="cri-o://184be0c67756e6f99f1d9e6bec269882a479d8f46ce0ae8cdbd971f416738626" gracePeriod=10 Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.609791 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.610316 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bd6174ac-18d9-49c7-9c25-ad75ca3a2d97" containerName="kube-state-metrics" containerID="cri-o://b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098" gracePeriod=30 Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.718561 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"196adbf1-176b-4741-b60d-97ec6f3473d5","Type":"ContainerStarted","Data":"5eb32a28eaf299b8245ba3db2ed3c5cb0cf51ca105306bf146ad44a86bb86fcd"} Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.718614 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"196adbf1-176b-4741-b60d-97ec6f3473d5","Type":"ContainerStarted","Data":"79eb417d35f9ec8b4598f71f855cb04190e42424db96a2850f7305b4b9e32d4b"} Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.725269 4890 generic.go:334] "Generic (PLEG): container finished" podID="34c20b0a-f576-475e-846d-75442d91073d" containerID="10c72ed0b55323e6e81eb28fdd4bd49dbab1b1a9fb1044a4284d618c6d81f405" exitCode=0 Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.725410 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-frrhq" event={"ID":"34c20b0a-f576-475e-846d-75442d91073d","Type":"ContainerDied","Data":"10c72ed0b55323e6e81eb28fdd4bd49dbab1b1a9fb1044a4284d618c6d81f405"} Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.730386 4890 generic.go:334] 
"Generic (PLEG): container finished" podID="78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" containerID="184be0c67756e6f99f1d9e6bec269882a479d8f46ce0ae8cdbd971f416738626" exitCode=0 Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.731303 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" event={"ID":"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130","Type":"ContainerDied","Data":"184be0c67756e6f99f1d9e6bec269882a479d8f46ce0ae8cdbd971f416738626"} Jan 21 15:55:19 crc kubenswrapper[4890]: I0121 15:55:19.798019 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.120792 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.219567 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsb2f\" (UniqueName: \"kubernetes.io/projected/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-kube-api-access-jsb2f\") pod \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.219750 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-nb\") pod \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.219821 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-swift-storage-0\") pod \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.219860 4890 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-config\") pod \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.219988 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-svc\") pod \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.220025 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-sb\") pod \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\" (UID: \"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130\") " Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.268648 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-kube-api-access-jsb2f" (OuterVolumeSpecName: "kube-api-access-jsb2f") pod "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" (UID: "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130"). InnerVolumeSpecName "kube-api-access-jsb2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.339733 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsb2f\" (UniqueName: \"kubernetes.io/projected/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-kube-api-access-jsb2f\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.355129 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" (UID: "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.399999 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" (UID: "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.445017 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" (UID: "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.446583 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.446615 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.446625 4890 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.454200 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.454609 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.509137 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-config" (OuterVolumeSpecName: "config") pod "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" (UID: "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.548188 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.548812 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.549015 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" (UID: "78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.650406 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zr52\" (UniqueName: \"kubernetes.io/projected/bd6174ac-18d9-49c7-9c25-ad75ca3a2d97-kube-api-access-9zr52\") pod \"bd6174ac-18d9-49c7-9c25-ad75ca3a2d97\" (UID: \"bd6174ac-18d9-49c7-9c25-ad75ca3a2d97\") " Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.652800 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.661526 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6174ac-18d9-49c7-9c25-ad75ca3a2d97-kube-api-access-9zr52" (OuterVolumeSpecName: "kube-api-access-9zr52") pod "bd6174ac-18d9-49c7-9c25-ad75ca3a2d97" (UID: "bd6174ac-18d9-49c7-9c25-ad75ca3a2d97"). InnerVolumeSpecName "kube-api-access-9zr52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.739940 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.741452 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-z8r2k" event={"ID":"78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130","Type":"ContainerDied","Data":"4c64c8f44536d63ad005a2fe0344b9b0d4501d6add3f6a034d77acc7367cc122"} Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.743759 4890 scope.go:117] "RemoveContainer" containerID="184be0c67756e6f99f1d9e6bec269882a479d8f46ce0ae8cdbd971f416738626" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.755697 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zr52\" (UniqueName: \"kubernetes.io/projected/bd6174ac-18d9-49c7-9c25-ad75ca3a2d97-kube-api-access-9zr52\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.766494 4890 generic.go:334] "Generic (PLEG): container finished" podID="bd6174ac-18d9-49c7-9c25-ad75ca3a2d97" containerID="b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098" exitCode=2 Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.766597 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.766718 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd6174ac-18d9-49c7-9c25-ad75ca3a2d97","Type":"ContainerDied","Data":"b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098"} Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.766756 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd6174ac-18d9-49c7-9c25-ad75ca3a2d97","Type":"ContainerDied","Data":"88eaead50e63f3ba605c0cd3876e21faf17e07bf8f62b620e0ba5db7ae9235db"} Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.802961 4890 scope.go:117] "RemoveContainer" containerID="ba20004fef84efb5196818e15a0e66142dea7de3850f21b544cb88a0fc4da4ef" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.833269 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.833238198 podStartE2EDuration="3.833238198s" podCreationTimestamp="2026-01-21 15:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:20.799818417 +0000 UTC m=+1403.161260836" watchObservedRunningTime="2026-01-21 15:55:20.833238198 +0000 UTC m=+1403.194680637" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.848178 4890 scope.go:117] "RemoveContainer" containerID="b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.852405 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.864404 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.872850 4890 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-z8r2k"] Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.883251 4890 scope.go:117] "RemoveContainer" containerID="b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.883811 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:55:20 crc kubenswrapper[4890]: E0121 15:55:20.884485 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" containerName="dnsmasq-dns" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.884508 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" containerName="dnsmasq-dns" Jan 21 15:55:20 crc kubenswrapper[4890]: E0121 15:55:20.884561 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" containerName="init" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.884571 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" containerName="init" Jan 21 15:55:20 crc kubenswrapper[4890]: E0121 15:55:20.884591 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6174ac-18d9-49c7-9c25-ad75ca3a2d97" containerName="kube-state-metrics" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.884600 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6174ac-18d9-49c7-9c25-ad75ca3a2d97" containerName="kube-state-metrics" Jan 21 15:55:20 crc kubenswrapper[4890]: E0121 15:55:20.884795 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098\": container with ID starting with b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098 not found: ID does not exist" 
containerID="b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.884849 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098"} err="failed to get container status \"b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098\": rpc error: code = NotFound desc = could not find container \"b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098\": container with ID starting with b372da42e26dde0c823af67e5f77875dd79b8df802965d4f93085deac2990098 not found: ID does not exist" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.884911 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" containerName="dnsmasq-dns" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.884931 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6174ac-18d9-49c7-9c25-ad75ca3a2d97" containerName="kube-state-metrics" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.885785 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.888700 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.893408 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.900433 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-z8r2k"] Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.914321 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.964196 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htgk6\" (UniqueName: \"kubernetes.io/projected/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-api-access-htgk6\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.964276 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:20 crc kubenswrapper[4890]: I0121 15:55:20.964342 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:20 crc 
kubenswrapper[4890]: I0121 15:55:20.964436 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.066185 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htgk6\" (UniqueName: \"kubernetes.io/projected/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-api-access-htgk6\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.066241 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.066300 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.066361 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 
15:55:21.071100 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.071824 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.085892 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htgk6\" (UniqueName: \"kubernetes.io/projected/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-api-access-htgk6\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.088609 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.191163 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.226394 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.269612 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-combined-ca-bundle\") pod \"34c20b0a-f576-475e-846d-75442d91073d\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.270008 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lndhx\" (UniqueName: \"kubernetes.io/projected/34c20b0a-f576-475e-846d-75442d91073d-kube-api-access-lndhx\") pod \"34c20b0a-f576-475e-846d-75442d91073d\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.270035 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-config-data\") pod \"34c20b0a-f576-475e-846d-75442d91073d\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.270061 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-scripts\") pod \"34c20b0a-f576-475e-846d-75442d91073d\" (UID: \"34c20b0a-f576-475e-846d-75442d91073d\") " Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.274264 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-scripts" (OuterVolumeSpecName: "scripts") pod "34c20b0a-f576-475e-846d-75442d91073d" (UID: "34c20b0a-f576-475e-846d-75442d91073d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.280283 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c20b0a-f576-475e-846d-75442d91073d-kube-api-access-lndhx" (OuterVolumeSpecName: "kube-api-access-lndhx") pod "34c20b0a-f576-475e-846d-75442d91073d" (UID: "34c20b0a-f576-475e-846d-75442d91073d"). InnerVolumeSpecName "kube-api-access-lndhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.300370 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34c20b0a-f576-475e-846d-75442d91073d" (UID: "34c20b0a-f576-475e-846d-75442d91073d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.314808 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-config-data" (OuterVolumeSpecName: "config-data") pod "34c20b0a-f576-475e-846d-75442d91073d" (UID: "34c20b0a-f576-475e-846d-75442d91073d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.372379 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lndhx\" (UniqueName: \"kubernetes.io/projected/34c20b0a-f576-475e-846d-75442d91073d-kube-api-access-lndhx\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.372467 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.372477 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.372486 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c20b0a-f576-475e-846d-75442d91073d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.748892 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.783642 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.800461 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"736df6ca-1308-4f87-a39e-7aca6ad4d5a1","Type":"ContainerStarted","Data":"06b6392576ce5b1b073fa85b31665f25e5e5344c3929f976645fa39a084f41c0"} Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.802480 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-frrhq" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.805157 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-frrhq" event={"ID":"34c20b0a-f576-475e-846d-75442d91073d","Type":"ContainerDied","Data":"076ea606d0272ea00602daaa2b658ad52fa5ab798794bb867cccf02aad6427aa"} Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.805212 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="076ea606d0272ea00602daaa2b658ad52fa5ab798794bb867cccf02aad6427aa" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.928709 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130" path="/var/lib/kubelet/pods/78b1abb5-c7e6-49ae-8c9e-8c69f2cb5130/volumes" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.931730 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6174ac-18d9-49c7-9c25-ad75ca3a2d97" path="/var/lib/kubelet/pods/bd6174ac-18d9-49c7-9c25-ad75ca3a2d97/volumes" Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.963854 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.964141 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-log" containerID="cri-o://637de34232c5eaa309845cd6c25b6d75489ab3d96c0fb5511356c65eb3487a78" gracePeriod=30 Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.964673 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-api" containerID="cri-o://ed1727f41b6a25356f26403270982000ad0310e419878a73d965b4a85da6db26" gracePeriod=30 Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.992318 4890 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:55:21 crc kubenswrapper[4890]: I0121 15:55:21.992655 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fb02425b-8695-4932-9d41-ecd4c7e13a0e" containerName="nova-scheduler-scheduler" containerID="cri-o://6050c6b53fd24e4dae30edf731d977760fa68c44449652684c6ca1a444b5e755" gracePeriod=30 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.007445 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.007648 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerName="nova-metadata-log" containerID="cri-o://79eb417d35f9ec8b4598f71f855cb04190e42424db96a2850f7305b4b9e32d4b" gracePeriod=30 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.007784 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerName="nova-metadata-metadata" containerID="cri-o://5eb32a28eaf299b8245ba3db2ed3c5cb0cf51ca105306bf146ad44a86bb86fcd" gracePeriod=30 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.433209 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.433938 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="ceilometer-central-agent" containerID="cri-o://2473a0b336b6af59b0d120e3f5c762ac26eeffce33827dcc1281e5e5a77e513c" gracePeriod=30 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.434004 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="proxy-httpd" containerID="cri-o://37b77d106c7ced19bd7bf74229155a9d719c70a01e1d82b242fa7a0b06b96797" gracePeriod=30 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.434015 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="sg-core" containerID="cri-o://2968880d67f8d80e261c0cd1700a0c0f861afc00fa584cc1cbe344bcbb428a78" gracePeriod=30 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.434084 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="ceilometer-notification-agent" containerID="cri-o://7017c19d128965e3ea5519acb897d64a4a0186fd9ac0a8111145505811b71486" gracePeriod=30 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.825098 4890 generic.go:334] "Generic (PLEG): container finished" podID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerID="5eb32a28eaf299b8245ba3db2ed3c5cb0cf51ca105306bf146ad44a86bb86fcd" exitCode=0 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.825127 4890 generic.go:334] "Generic (PLEG): container finished" podID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerID="79eb417d35f9ec8b4598f71f855cb04190e42424db96a2850f7305b4b9e32d4b" exitCode=143 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.825157 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"196adbf1-176b-4741-b60d-97ec6f3473d5","Type":"ContainerDied","Data":"5eb32a28eaf299b8245ba3db2ed3c5cb0cf51ca105306bf146ad44a86bb86fcd"} Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.825182 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"196adbf1-176b-4741-b60d-97ec6f3473d5","Type":"ContainerDied","Data":"79eb417d35f9ec8b4598f71f855cb04190e42424db96a2850f7305b4b9e32d4b"} Jan 21 
15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.826535 4890 generic.go:334] "Generic (PLEG): container finished" podID="db9e685f-c1ae-4780-b678-82ca547207b1" containerID="637de34232c5eaa309845cd6c25b6d75489ab3d96c0fb5511356c65eb3487a78" exitCode=143 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.826574 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db9e685f-c1ae-4780-b678-82ca547207b1","Type":"ContainerDied","Data":"637de34232c5eaa309845cd6c25b6d75489ab3d96c0fb5511356c65eb3487a78"} Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.828724 4890 generic.go:334] "Generic (PLEG): container finished" podID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerID="37b77d106c7ced19bd7bf74229155a9d719c70a01e1d82b242fa7a0b06b96797" exitCode=0 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.828756 4890 generic.go:334] "Generic (PLEG): container finished" podID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerID="2968880d67f8d80e261c0cd1700a0c0f861afc00fa584cc1cbe344bcbb428a78" exitCode=2 Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.828775 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerDied","Data":"37b77d106c7ced19bd7bf74229155a9d719c70a01e1d82b242fa7a0b06b96797"} Jan 21 15:55:22 crc kubenswrapper[4890]: I0121 15:55:22.828800 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerDied","Data":"2968880d67f8d80e261c0cd1700a0c0f861afc00fa584cc1cbe344bcbb428a78"} Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.129430 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.129729 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:55:23 crc 
kubenswrapper[4890]: I0121 15:55:23.636960 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.732140 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-nova-metadata-tls-certs\") pod \"196adbf1-176b-4741-b60d-97ec6f3473d5\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.732316 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-combined-ca-bundle\") pod \"196adbf1-176b-4741-b60d-97ec6f3473d5\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.732381 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk66t\" (UniqueName: \"kubernetes.io/projected/196adbf1-176b-4741-b60d-97ec6f3473d5-kube-api-access-fk66t\") pod \"196adbf1-176b-4741-b60d-97ec6f3473d5\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.732421 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/196adbf1-176b-4741-b60d-97ec6f3473d5-logs\") pod \"196adbf1-176b-4741-b60d-97ec6f3473d5\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.732571 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-config-data\") pod \"196adbf1-176b-4741-b60d-97ec6f3473d5\" (UID: \"196adbf1-176b-4741-b60d-97ec6f3473d5\") " Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.732916 4890 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/196adbf1-176b-4741-b60d-97ec6f3473d5-logs" (OuterVolumeSpecName: "logs") pod "196adbf1-176b-4741-b60d-97ec6f3473d5" (UID: "196adbf1-176b-4741-b60d-97ec6f3473d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.733465 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/196adbf1-176b-4741-b60d-97ec6f3473d5-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.783671 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "196adbf1-176b-4741-b60d-97ec6f3473d5" (UID: "196adbf1-176b-4741-b60d-97ec6f3473d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.795458 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196adbf1-176b-4741-b60d-97ec6f3473d5-kube-api-access-fk66t" (OuterVolumeSpecName: "kube-api-access-fk66t") pod "196adbf1-176b-4741-b60d-97ec6f3473d5" (UID: "196adbf1-176b-4741-b60d-97ec6f3473d5"). InnerVolumeSpecName "kube-api-access-fk66t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.795973 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-config-data" (OuterVolumeSpecName: "config-data") pod "196adbf1-176b-4741-b60d-97ec6f3473d5" (UID: "196adbf1-176b-4741-b60d-97ec6f3473d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.813563 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "196adbf1-176b-4741-b60d-97ec6f3473d5" (UID: "196adbf1-176b-4741-b60d-97ec6f3473d5"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.834645 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.834673 4890 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.834685 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196adbf1-176b-4741-b60d-97ec6f3473d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.834695 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk66t\" (UniqueName: \"kubernetes.io/projected/196adbf1-176b-4741-b60d-97ec6f3473d5-kube-api-access-fk66t\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.837583 4890 generic.go:334] "Generic (PLEG): container finished" podID="fb02425b-8695-4932-9d41-ecd4c7e13a0e" containerID="6050c6b53fd24e4dae30edf731d977760fa68c44449652684c6ca1a444b5e755" exitCode=0 Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.837647 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"fb02425b-8695-4932-9d41-ecd4c7e13a0e","Type":"ContainerDied","Data":"6050c6b53fd24e4dae30edf731d977760fa68c44449652684c6ca1a444b5e755"} Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.839291 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"736df6ca-1308-4f87-a39e-7aca6ad4d5a1","Type":"ContainerStarted","Data":"612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54"} Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.840598 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.844085 4890 generic.go:334] "Generic (PLEG): container finished" podID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerID="2473a0b336b6af59b0d120e3f5c762ac26eeffce33827dcc1281e5e5a77e513c" exitCode=0 Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.844144 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerDied","Data":"2473a0b336b6af59b0d120e3f5c762ac26eeffce33827dcc1281e5e5a77e513c"} Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.845982 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"196adbf1-176b-4741-b60d-97ec6f3473d5","Type":"ContainerDied","Data":"cb013b2a3454c6222b446efdce07d202da87d5a8f768107bbe52b70308d0badc"} Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.846015 4890 scope.go:117] "RemoveContainer" containerID="5eb32a28eaf299b8245ba3db2ed3c5cb0cf51ca105306bf146ad44a86bb86fcd" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.846150 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.865558 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.2979286 podStartE2EDuration="3.865542244s" podCreationTimestamp="2026-01-21 15:55:20 +0000 UTC" firstStartedPulling="2026-01-21 15:55:21.748588446 +0000 UTC m=+1404.110030865" lastFinishedPulling="2026-01-21 15:55:22.3162021 +0000 UTC m=+1404.677644509" observedRunningTime="2026-01-21 15:55:23.863482883 +0000 UTC m=+1406.224925292" watchObservedRunningTime="2026-01-21 15:55:23.865542244 +0000 UTC m=+1406.226984653" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.890430 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.905640 4890 scope.go:117] "RemoveContainer" containerID="79eb417d35f9ec8b4598f71f855cb04190e42424db96a2850f7305b4b9e32d4b" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.935537 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-combined-ca-bundle\") pod \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.935754 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdgmh\" (UniqueName: \"kubernetes.io/projected/fb02425b-8695-4932-9d41-ecd4c7e13a0e-kube-api-access-vdgmh\") pod \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.935786 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-config-data\") pod \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.945205 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb02425b-8695-4932-9d41-ecd4c7e13a0e-kube-api-access-vdgmh" (OuterVolumeSpecName: "kube-api-access-vdgmh") pod "fb02425b-8695-4932-9d41-ecd4c7e13a0e" (UID: "fb02425b-8695-4932-9d41-ecd4c7e13a0e"). InnerVolumeSpecName "kube-api-access-vdgmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971014 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971238 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971254 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:23 crc kubenswrapper[4890]: E0121 15:55:23.971620 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c20b0a-f576-475e-846d-75442d91073d" containerName="nova-manage" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971632 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c20b0a-f576-475e-846d-75442d91073d" containerName="nova-manage" Jan 21 15:55:23 crc kubenswrapper[4890]: E0121 15:55:23.971655 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerName="nova-metadata-metadata" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971661 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerName="nova-metadata-metadata" Jan 21 15:55:23 crc kubenswrapper[4890]: E0121 15:55:23.971676 4890 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerName="nova-metadata-log" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971682 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerName="nova-metadata-log" Jan 21 15:55:23 crc kubenswrapper[4890]: E0121 15:55:23.971689 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb02425b-8695-4932-9d41-ecd4c7e13a0e" containerName="nova-scheduler-scheduler" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971695 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb02425b-8695-4932-9d41-ecd4c7e13a0e" containerName="nova-scheduler-scheduler" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971873 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerName="nova-metadata-log" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971893 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="196adbf1-176b-4741-b60d-97ec6f3473d5" containerName="nova-metadata-metadata" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971899 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c20b0a-f576-475e-846d-75442d91073d" containerName="nova-manage" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.971910 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb02425b-8695-4932-9d41-ecd4c7e13a0e" containerName="nova-scheduler-scheduler" Jan 21 15:55:23 crc kubenswrapper[4890]: E0121 15:55:23.972318 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-config-data podName:fb02425b-8695-4932-9d41-ecd4c7e13a0e nodeName:}" failed. No retries permitted until 2026-01-21 15:55:24.472296071 +0000 UTC m=+1406.833738480 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-config-data") pod "fb02425b-8695-4932-9d41-ecd4c7e13a0e" (UID: "fb02425b-8695-4932-9d41-ecd4c7e13a0e") : error deleting /var/lib/kubelet/pods/fb02425b-8695-4932-9d41-ecd4c7e13a0e/volume-subpaths: remove /var/lib/kubelet/pods/fb02425b-8695-4932-9d41-ecd4c7e13a0e/volume-subpaths: no such file or directory Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.973063 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.975737 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.975959 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.984038 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:23 crc kubenswrapper[4890]: I0121 15:55:23.996506 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb02425b-8695-4932-9d41-ecd4c7e13a0e" (UID: "fb02425b-8695-4932-9d41-ecd4c7e13a0e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.040132 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.040204 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9fv\" (UniqueName: \"kubernetes.io/projected/6845ac08-f194-417b-be65-16fa5d4fac41-kube-api-access-ft9fv\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.040229 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-config-data\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.040248 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.040290 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6845ac08-f194-417b-be65-16fa5d4fac41-logs\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc 
kubenswrapper[4890]: I0121 15:55:24.040336 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdgmh\" (UniqueName: \"kubernetes.io/projected/fb02425b-8695-4932-9d41-ecd4c7e13a0e-kube-api-access-vdgmh\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.040360 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.142460 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6845ac08-f194-417b-be65-16fa5d4fac41-logs\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.142581 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.142632 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9fv\" (UniqueName: \"kubernetes.io/projected/6845ac08-f194-417b-be65-16fa5d4fac41-kube-api-access-ft9fv\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.142654 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-config-data\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc 
kubenswrapper[4890]: I0121 15:55:24.142671 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.143594 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6845ac08-f194-417b-be65-16fa5d4fac41-logs\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.145707 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.146930 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-config-data\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.147028 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.161285 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9fv\" (UniqueName: 
\"kubernetes.io/projected/6845ac08-f194-417b-be65-16fa5d4fac41-kube-api-access-ft9fv\") pod \"nova-metadata-0\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.383450 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.549405 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-config-data\") pod \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\" (UID: \"fb02425b-8695-4932-9d41-ecd4c7e13a0e\") " Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.555196 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-config-data" (OuterVolumeSpecName: "config-data") pod "fb02425b-8695-4932-9d41-ecd4c7e13a0e" (UID: "fb02425b-8695-4932-9d41-ecd4c7e13a0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.652194 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb02425b-8695-4932-9d41-ecd4c7e13a0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.854716 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:55:24 crc kubenswrapper[4890]: W0121 15:55:24.857212 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6845ac08_f194_417b_be65_16fa5d4fac41.slice/crio-ed89285c51de340c69ea5ef04a8d253a6c4397374d9cbd23852b637ca3972a1c WatchSource:0}: Error finding container ed89285c51de340c69ea5ef04a8d253a6c4397374d9cbd23852b637ca3972a1c: Status 404 returned error can't find the container with id ed89285c51de340c69ea5ef04a8d253a6c4397374d9cbd23852b637ca3972a1c Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.861687 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.861732 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb02425b-8695-4932-9d41-ecd4c7e13a0e","Type":"ContainerDied","Data":"d8b3a7b245c259865f51df2ad6163817882cfde4b6dcc925492176a669eae627"} Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.861766 4890 scope.go:117] "RemoveContainer" containerID="6050c6b53fd24e4dae30edf731d977760fa68c44449652684c6ca1a444b5e755" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.925043 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.941833 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.952681 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.954037 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.958035 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 15:55:24 crc kubenswrapper[4890]: I0121 15:55:24.960904 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.058741 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.059101 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwmr\" (UniqueName: \"kubernetes.io/projected/38032245-27d5-4a93-998b-8fb378d98197-kube-api-access-nqwmr\") pod \"nova-scheduler-0\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.059162 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-config-data\") pod \"nova-scheduler-0\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.160536 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-config-data\") pod \"nova-scheduler-0\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.160699 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.160753 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqwmr\" (UniqueName: \"kubernetes.io/projected/38032245-27d5-4a93-998b-8fb378d98197-kube-api-access-nqwmr\") pod \"nova-scheduler-0\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.164105 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.173038 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-config-data\") pod \"nova-scheduler-0\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.177165 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqwmr\" (UniqueName: \"kubernetes.io/projected/38032245-27d5-4a93-998b-8fb378d98197-kube-api-access-nqwmr\") pod \"nova-scheduler-0\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.276068 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.775246 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.877747 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6845ac08-f194-417b-be65-16fa5d4fac41","Type":"ContainerStarted","Data":"153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5"} Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.878579 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6845ac08-f194-417b-be65-16fa5d4fac41","Type":"ContainerStarted","Data":"e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd"} Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.878676 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6845ac08-f194-417b-be65-16fa5d4fac41","Type":"ContainerStarted","Data":"ed89285c51de340c69ea5ef04a8d253a6c4397374d9cbd23852b637ca3972a1c"} Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.879633 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38032245-27d5-4a93-998b-8fb378d98197","Type":"ContainerStarted","Data":"ef64cfd15cc24f5c01254f144263dd2ac9c6907779d022995f9b6a208060349c"} Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.925739 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196adbf1-176b-4741-b60d-97ec6f3473d5" path="/var/lib/kubelet/pods/196adbf1-176b-4741-b60d-97ec6f3473d5/volumes" Jan 21 15:55:25 crc kubenswrapper[4890]: I0121 15:55:25.926478 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb02425b-8695-4932-9d41-ecd4c7e13a0e" path="/var/lib/kubelet/pods/fb02425b-8695-4932-9d41-ecd4c7e13a0e/volumes" Jan 21 15:55:26 crc kubenswrapper[4890]: I0121 15:55:26.893210 4890 
generic.go:334] "Generic (PLEG): container finished" podID="db9e685f-c1ae-4780-b678-82ca547207b1" containerID="ed1727f41b6a25356f26403270982000ad0310e419878a73d965b4a85da6db26" exitCode=0 Jan 21 15:55:26 crc kubenswrapper[4890]: I0121 15:55:26.893291 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db9e685f-c1ae-4780-b678-82ca547207b1","Type":"ContainerDied","Data":"ed1727f41b6a25356f26403270982000ad0310e419878a73d965b4a85da6db26"} Jan 21 15:55:26 crc kubenswrapper[4890]: I0121 15:55:26.897314 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38032245-27d5-4a93-998b-8fb378d98197","Type":"ContainerStarted","Data":"433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55"} Jan 21 15:55:26 crc kubenswrapper[4890]: I0121 15:55:26.901262 4890 generic.go:334] "Generic (PLEG): container finished" podID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerID="7017c19d128965e3ea5519acb897d64a4a0186fd9ac0a8111145505811b71486" exitCode=0 Jan 21 15:55:26 crc kubenswrapper[4890]: I0121 15:55:26.901300 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerDied","Data":"7017c19d128965e3ea5519acb897d64a4a0186fd9ac0a8111145505811b71486"} Jan 21 15:55:26 crc kubenswrapper[4890]: I0121 15:55:26.914817 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.914798202 podStartE2EDuration="2.914798202s" podCreationTimestamp="2026-01-21 15:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:26.912205518 +0000 UTC m=+1409.273647947" watchObservedRunningTime="2026-01-21 15:55:26.914798202 +0000 UTC m=+1409.276240631" Jan 21 15:55:26 crc kubenswrapper[4890]: I0121 15:55:26.939999 4890 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.939972779 podStartE2EDuration="3.939972779s" podCreationTimestamp="2026-01-21 15:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:26.931463847 +0000 UTC m=+1409.292906276" watchObservedRunningTime="2026-01-21 15:55:26.939972779 +0000 UTC m=+1409.301415198" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.540295 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.609710 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slvf2\" (UniqueName: \"kubernetes.io/projected/db9e685f-c1ae-4780-b678-82ca547207b1-kube-api-access-slvf2\") pod \"db9e685f-c1ae-4780-b678-82ca547207b1\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.609818 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-config-data\") pod \"db9e685f-c1ae-4780-b678-82ca547207b1\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.609909 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9e685f-c1ae-4780-b678-82ca547207b1-logs\") pod \"db9e685f-c1ae-4780-b678-82ca547207b1\" (UID: \"db9e685f-c1ae-4780-b678-82ca547207b1\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.609950 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-combined-ca-bundle\") pod \"db9e685f-c1ae-4780-b678-82ca547207b1\" (UID: 
\"db9e685f-c1ae-4780-b678-82ca547207b1\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.611766 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9e685f-c1ae-4780-b678-82ca547207b1-logs" (OuterVolumeSpecName: "logs") pod "db9e685f-c1ae-4780-b678-82ca547207b1" (UID: "db9e685f-c1ae-4780-b678-82ca547207b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.615474 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db9e685f-c1ae-4780-b678-82ca547207b1-kube-api-access-slvf2" (OuterVolumeSpecName: "kube-api-access-slvf2") pod "db9e685f-c1ae-4780-b678-82ca547207b1" (UID: "db9e685f-c1ae-4780-b678-82ca547207b1"). InnerVolumeSpecName "kube-api-access-slvf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.657699 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-config-data" (OuterVolumeSpecName: "config-data") pod "db9e685f-c1ae-4780-b678-82ca547207b1" (UID: "db9e685f-c1ae-4780-b678-82ca547207b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.657722 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db9e685f-c1ae-4780-b678-82ca547207b1" (UID: "db9e685f-c1ae-4780-b678-82ca547207b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.711842 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slvf2\" (UniqueName: \"kubernetes.io/projected/db9e685f-c1ae-4780-b678-82ca547207b1-kube-api-access-slvf2\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.711889 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.711903 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9e685f-c1ae-4780-b678-82ca547207b1-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.711915 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9e685f-c1ae-4780-b678-82ca547207b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.738694 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.813946 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-config-data\") pod \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.814156 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-run-httpd\") pod \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.814230 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-sg-core-conf-yaml\") pod \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.814307 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-log-httpd\") pod \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.814461 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r9l4\" (UniqueName: \"kubernetes.io/projected/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-kube-api-access-9r9l4\") pod \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.814510 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-scripts\") pod \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.814540 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-combined-ca-bundle\") pod \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\" (UID: \"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f\") " Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.818826 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" (UID: "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.821444 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" (UID: "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.828968 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-scripts" (OuterVolumeSpecName: "scripts") pod "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" (UID: "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.846558 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-kube-api-access-9r9l4" (OuterVolumeSpecName: "kube-api-access-9r9l4") pod "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" (UID: "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f"). InnerVolumeSpecName "kube-api-access-9r9l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.872721 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" (UID: "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.910104 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f","Type":"ContainerDied","Data":"96749d399bb56d8ce0412a82f542361cdb79dda755aff5d9429a2e619d42c4b3"} Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.910639 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.911421 4890 scope.go:117] "RemoveContainer" containerID="37b77d106c7ced19bd7bf74229155a9d719c70a01e1d82b242fa7a0b06b96797" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.912246 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.912973 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db9e685f-c1ae-4780-b678-82ca547207b1","Type":"ContainerDied","Data":"fc060f265d426b37afe36af17ef5ba5bfd71fd812f6b8568b4115f9620af1653"} Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.916716 4890 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.916746 4890 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.916759 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r9l4\" (UniqueName: \"kubernetes.io/projected/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-kube-api-access-9r9l4\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.916773 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.916911 4890 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.917482 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" (UID: 
"33dedbb4-4f3f-41bc-bbc8-45dc50c1996f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.950025 4890 scope.go:117] "RemoveContainer" containerID="2968880d67f8d80e261c0cd1700a0c0f861afc00fa584cc1cbe344bcbb428a78" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.959101 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.966932 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-config-data" (OuterVolumeSpecName: "config-data") pod "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" (UID: "33dedbb4-4f3f-41bc-bbc8-45dc50c1996f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.968786 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.991779 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:27 crc kubenswrapper[4890]: E0121 15:55:27.992285 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="ceilometer-central-agent" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992306 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="ceilometer-central-agent" Jan 21 15:55:27 crc kubenswrapper[4890]: E0121 15:55:27.992328 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="proxy-httpd" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992338 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="proxy-httpd" Jan 21 15:55:27 
crc kubenswrapper[4890]: E0121 15:55:27.992372 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="sg-core" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992379 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="sg-core" Jan 21 15:55:27 crc kubenswrapper[4890]: E0121 15:55:27.992390 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-api" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992396 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-api" Jan 21 15:55:27 crc kubenswrapper[4890]: E0121 15:55:27.992411 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="ceilometer-notification-agent" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992418 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="ceilometer-notification-agent" Jan 21 15:55:27 crc kubenswrapper[4890]: E0121 15:55:27.992435 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-log" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992444 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-log" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992645 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-log" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992662 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" containerName="nova-api-api" Jan 21 15:55:27 crc 
kubenswrapper[4890]: I0121 15:55:27.992676 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="sg-core" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992693 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="ceilometer-notification-agent" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992703 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="proxy-httpd" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.992712 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" containerName="ceilometer-central-agent" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.993911 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:55:27 crc kubenswrapper[4890]: I0121 15:55:27.998913 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.005166 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.021934 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551a96b4-d28a-4452-85f2-fa7bbeac25a0-logs\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.022068 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-config-data\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 
21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.022120 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.022187 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp6p5\" (UniqueName: \"kubernetes.io/projected/551a96b4-d28a-4452-85f2-fa7bbeac25a0-kube-api-access-hp6p5\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.022305 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.022322 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.029319 4890 scope.go:117] "RemoveContainer" containerID="7017c19d128965e3ea5519acb897d64a4a0186fd9ac0a8111145505811b71486" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.053838 4890 scope.go:117] "RemoveContainer" containerID="2473a0b336b6af59b0d120e3f5c762ac26eeffce33827dcc1281e5e5a77e513c" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.078435 4890 scope.go:117] "RemoveContainer" containerID="ed1727f41b6a25356f26403270982000ad0310e419878a73d965b4a85da6db26" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.104607 4890 scope.go:117] "RemoveContainer" 
containerID="637de34232c5eaa309845cd6c25b6d75489ab3d96c0fb5511356c65eb3487a78" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.123201 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp6p5\" (UniqueName: \"kubernetes.io/projected/551a96b4-d28a-4452-85f2-fa7bbeac25a0-kube-api-access-hp6p5\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.123377 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551a96b4-d28a-4452-85f2-fa7bbeac25a0-logs\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.123435 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-config-data\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.123464 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.125127 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551a96b4-d28a-4452-85f2-fa7bbeac25a0-logs\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.128656 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.128903 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-config-data\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.143935 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp6p5\" (UniqueName: \"kubernetes.io/projected/551a96b4-d28a-4452-85f2-fa7bbeac25a0-kube-api-access-hp6p5\") pod \"nova-api-0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.245877 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.257238 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.277324 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.279748 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.281719 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.281795 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.283300 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.302957 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.326189 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.327521 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-scripts\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.327575 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.327610 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-run-httpd\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc 
kubenswrapper[4890]: I0121 15:55:28.327643 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-config-data\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.327665 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.327752 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-log-httpd\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.327777 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpsh\" (UniqueName: \"kubernetes.io/projected/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-kube-api-access-bgpsh\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.327800 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.429434 4890 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-log-httpd\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.429719 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpsh\" (UniqueName: \"kubernetes.io/projected/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-kube-api-access-bgpsh\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.429756 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.429790 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-scripts\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.429835 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.429862 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-run-httpd\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" 
Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.429902 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-config-data\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.429926 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.430843 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-run-httpd\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.431500 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-log-httpd\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.436434 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.436752 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-config-data\") pod 
\"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.437800 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.438580 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.445414 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-scripts\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.455246 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpsh\" (UniqueName: \"kubernetes.io/projected/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-kube-api-access-bgpsh\") pod \"ceilometer-0\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.601824 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.774999 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:28 crc kubenswrapper[4890]: I0121 15:55:28.930277 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"551a96b4-d28a-4452-85f2-fa7bbeac25a0","Type":"ContainerStarted","Data":"240860bb9b1299a7dae8aa4d9d05c870241e82727fb4d04a8d828591960ec888"} Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.114529 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:29 crc kubenswrapper[4890]: W0121 15:55:29.119151 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cdcd258_ceb7_47d2_8ab3_bb6b53fd806b.slice/crio-8bc991cdd44ec808e10219c44c1be41b472a91901bcc9a2ff79269abca36dae4 WatchSource:0}: Error finding container 8bc991cdd44ec808e10219c44c1be41b472a91901bcc9a2ff79269abca36dae4: Status 404 returned error can't find the container with id 8bc991cdd44ec808e10219c44c1be41b472a91901bcc9a2ff79269abca36dae4 Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.384277 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.384378 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.924716 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33dedbb4-4f3f-41bc-bbc8-45dc50c1996f" path="/var/lib/kubelet/pods/33dedbb4-4f3f-41bc-bbc8-45dc50c1996f/volumes" Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.925997 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db9e685f-c1ae-4780-b678-82ca547207b1" 
path="/var/lib/kubelet/pods/db9e685f-c1ae-4780-b678-82ca547207b1/volumes" Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.941543 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"551a96b4-d28a-4452-85f2-fa7bbeac25a0","Type":"ContainerStarted","Data":"e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538"} Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.941622 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"551a96b4-d28a-4452-85f2-fa7bbeac25a0","Type":"ContainerStarted","Data":"2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055"} Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.943727 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerStarted","Data":"961404c8b0869ce32ea7cde0a5a32b1aae6755fd6ed5890bc47463b9dfb14eb3"} Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.943761 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerStarted","Data":"8bc991cdd44ec808e10219c44c1be41b472a91901bcc9a2ff79269abca36dae4"} Jan 21 15:55:29 crc kubenswrapper[4890]: I0121 15:55:29.970819 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.970799298 podStartE2EDuration="2.970799298s" podCreationTimestamp="2026-01-21 15:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:29.957949828 +0000 UTC m=+1412.319392247" watchObservedRunningTime="2026-01-21 15:55:29.970799298 +0000 UTC m=+1412.332241707" Jan 21 15:55:30 crc kubenswrapper[4890]: I0121 15:55:30.281192 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 15:55:30 
crc kubenswrapper[4890]: I0121 15:55:30.954455 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerStarted","Data":"8c341d956ca3c98d75872c09cde9665b204e3d3c9ce374cd92a63f06aac5368e"} Jan 21 15:55:31 crc kubenswrapper[4890]: I0121 15:55:31.235005 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 15:55:31 crc kubenswrapper[4890]: I0121 15:55:31.970047 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerStarted","Data":"2a4704d16b27057f62640f24c0c0b5778da76fc95c4ecb1d8e23ecfa5d3918c7"} Jan 21 15:55:31 crc kubenswrapper[4890]: I0121 15:55:31.973081 4890 generic.go:334] "Generic (PLEG): container finished" podID="bb2e2a4d-9099-4d00-9f68-cd52b6566215" containerID="ac959b1ac528e4de2e66685bb7abda5d333740849f8b9ca6b1161716bbc68588" exitCode=0 Jan 21 15:55:31 crc kubenswrapper[4890]: I0121 15:55:31.973118 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kcplg" event={"ID":"bb2e2a4d-9099-4d00-9f68-cd52b6566215","Type":"ContainerDied","Data":"ac959b1ac528e4de2e66685bb7abda5d333740849f8b9ca6b1161716bbc68588"} Jan 21 15:55:32 crc kubenswrapper[4890]: I0121 15:55:32.987787 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerStarted","Data":"10e08bcfd126bcd3031c721466c71477df23426076cdd2aea2f592db18b977e8"} Jan 21 15:55:32 crc kubenswrapper[4890]: I0121 15:55:32.988448 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.023506 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.610908122 
podStartE2EDuration="5.023484991s" podCreationTimestamp="2026-01-21 15:55:28 +0000 UTC" firstStartedPulling="2026-01-21 15:55:29.123253268 +0000 UTC m=+1411.484695677" lastFinishedPulling="2026-01-21 15:55:32.535830137 +0000 UTC m=+1414.897272546" observedRunningTime="2026-01-21 15:55:33.013816981 +0000 UTC m=+1415.375259410" watchObservedRunningTime="2026-01-21 15:55:33.023484991 +0000 UTC m=+1415.384927410" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.381501 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kcplg" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.536085 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-scripts\") pod \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.536142 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-combined-ca-bundle\") pod \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.536183 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-config-data\") pod \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\" (UID: \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.536233 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4v87\" (UniqueName: \"kubernetes.io/projected/bb2e2a4d-9099-4d00-9f68-cd52b6566215-kube-api-access-p4v87\") pod \"bb2e2a4d-9099-4d00-9f68-cd52b6566215\" (UID: 
\"bb2e2a4d-9099-4d00-9f68-cd52b6566215\") " Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.542811 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb2e2a4d-9099-4d00-9f68-cd52b6566215-kube-api-access-p4v87" (OuterVolumeSpecName: "kube-api-access-p4v87") pod "bb2e2a4d-9099-4d00-9f68-cd52b6566215" (UID: "bb2e2a4d-9099-4d00-9f68-cd52b6566215"). InnerVolumeSpecName "kube-api-access-p4v87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.559580 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-scripts" (OuterVolumeSpecName: "scripts") pod "bb2e2a4d-9099-4d00-9f68-cd52b6566215" (UID: "bb2e2a4d-9099-4d00-9f68-cd52b6566215"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.572498 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-config-data" (OuterVolumeSpecName: "config-data") pod "bb2e2a4d-9099-4d00-9f68-cd52b6566215" (UID: "bb2e2a4d-9099-4d00-9f68-cd52b6566215"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.575096 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb2e2a4d-9099-4d00-9f68-cd52b6566215" (UID: "bb2e2a4d-9099-4d00-9f68-cd52b6566215"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.637969 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.638007 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.638019 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2e2a4d-9099-4d00-9f68-cd52b6566215-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.638027 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4v87\" (UniqueName: \"kubernetes.io/projected/bb2e2a4d-9099-4d00-9f68-cd52b6566215-kube-api-access-p4v87\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.997948 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kcplg" event={"ID":"bb2e2a4d-9099-4d00-9f68-cd52b6566215","Type":"ContainerDied","Data":"511daf357604aca908930ccd983ada10696fea4d18532dd9b7110e48d7dd8c20"} Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.998588 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511daf357604aca908930ccd983ada10696fea4d18532dd9b7110e48d7dd8c20" Jan 21 15:55:33 crc kubenswrapper[4890]: I0121 15:55:33.998004 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kcplg" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.078583 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:55:34 crc kubenswrapper[4890]: E0121 15:55:34.080747 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb2e2a4d-9099-4d00-9f68-cd52b6566215" containerName="nova-cell1-conductor-db-sync" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.080851 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb2e2a4d-9099-4d00-9f68-cd52b6566215" containerName="nova-cell1-conductor-db-sync" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.081425 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb2e2a4d-9099-4d00-9f68-cd52b6566215" containerName="nova-cell1-conductor-db-sync" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.082533 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.085236 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.091374 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.249128 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.249625 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq528\" (UniqueName: 
\"kubernetes.io/projected/477ba084-e185-42c6-a0ae-f5de448a4d13-kube-api-access-nq528\") pod \"nova-cell1-conductor-0\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.249808 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.351317 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq528\" (UniqueName: \"kubernetes.io/projected/477ba084-e185-42c6-a0ae-f5de448a4d13-kube-api-access-nq528\") pod \"nova-cell1-conductor-0\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.351743 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.352181 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.356791 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.361073 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.378121 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq528\" (UniqueName: \"kubernetes.io/projected/477ba084-e185-42c6-a0ae-f5de448a4d13-kube-api-access-nq528\") pod \"nova-cell1-conductor-0\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.386847 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.387319 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.410997 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:34 crc kubenswrapper[4890]: I0121 15:55:34.970010 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:55:35 crc kubenswrapper[4890]: I0121 15:55:35.054241 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"477ba084-e185-42c6-a0ae-f5de448a4d13","Type":"ContainerStarted","Data":"250a8698801696d01d4a92a1011062b3b5e5387c8bc148dac28b63c9ccafca8d"} Jan 21 15:55:35 crc kubenswrapper[4890]: I0121 15:55:35.277612 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 15:55:35 crc kubenswrapper[4890]: I0121 15:55:35.324121 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 15:55:35 crc kubenswrapper[4890]: I0121 15:55:35.434731 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:55:35 crc kubenswrapper[4890]: I0121 15:55:35.434779 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:55:36 crc kubenswrapper[4890]: I0121 15:55:36.063844 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"477ba084-e185-42c6-a0ae-f5de448a4d13","Type":"ContainerStarted","Data":"5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6"} Jan 21 15:55:36 crc kubenswrapper[4890]: I0121 15:55:36.078266 
4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.078224777 podStartE2EDuration="2.078224777s" podCreationTimestamp="2026-01-21 15:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:36.077212372 +0000 UTC m=+1418.438654801" watchObservedRunningTime="2026-01-21 15:55:36.078224777 +0000 UTC m=+1418.439667196" Jan 21 15:55:36 crc kubenswrapper[4890]: I0121 15:55:36.101968 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 15:55:37 crc kubenswrapper[4890]: I0121 15:55:37.073292 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:38 crc kubenswrapper[4890]: I0121 15:55:38.327547 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:55:38 crc kubenswrapper[4890]: I0121 15:55:38.327833 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:55:39 crc kubenswrapper[4890]: I0121 15:55:39.409607 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:55:39 crc kubenswrapper[4890]: I0121 15:55:39.409628 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:55:44 crc kubenswrapper[4890]: I0121 15:55:44.388639 4890 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:55:44 crc kubenswrapper[4890]: I0121 15:55:44.390312 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:55:44 crc kubenswrapper[4890]: I0121 15:55:44.396198 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:55:44 crc kubenswrapper[4890]: I0121 15:55:44.396639 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:55:44 crc kubenswrapper[4890]: I0121 15:55:44.445864 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.059020 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.104882 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-combined-ca-bundle\") pod \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.105098 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcg4q\" (UniqueName: \"kubernetes.io/projected/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-kube-api-access-dcg4q\") pod \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\" (UID: \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.105387 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-config-data\") pod \"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\" (UID: 
\"2a6623bf-ad19-4e29-84aa-d16fc10b29a3\") " Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.111439 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-kube-api-access-dcg4q" (OuterVolumeSpecName: "kube-api-access-dcg4q") pod "2a6623bf-ad19-4e29-84aa-d16fc10b29a3" (UID: "2a6623bf-ad19-4e29-84aa-d16fc10b29a3"). InnerVolumeSpecName "kube-api-access-dcg4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.131623 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a6623bf-ad19-4e29-84aa-d16fc10b29a3" (UID: "2a6623bf-ad19-4e29-84aa-d16fc10b29a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.140863 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-config-data" (OuterVolumeSpecName: "config-data") pod "2a6623bf-ad19-4e29-84aa-d16fc10b29a3" (UID: "2a6623bf-ad19-4e29-84aa-d16fc10b29a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.155651 4890 generic.go:334] "Generic (PLEG): container finished" podID="2a6623bf-ad19-4e29-84aa-d16fc10b29a3" containerID="f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80" exitCode=137 Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.155958 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2a6623bf-ad19-4e29-84aa-d16fc10b29a3","Type":"ContainerDied","Data":"f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80"} Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.156032 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2a6623bf-ad19-4e29-84aa-d16fc10b29a3","Type":"ContainerDied","Data":"959d7db2d32a74f4a777d61ca0e7fce7a1949d5005241dc50f2b6ac7666745f2"} Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.156070 4890 scope.go:117] "RemoveContainer" containerID="f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.156616 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.211912 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcg4q\" (UniqueName: \"kubernetes.io/projected/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-kube-api-access-dcg4q\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.211961 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.211975 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a6623bf-ad19-4e29-84aa-d16fc10b29a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.219458 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.228165 4890 scope.go:117] "RemoveContainer" containerID="f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80" Jan 21 15:55:46 crc kubenswrapper[4890]: E0121 15:55:46.232275 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80\": container with ID starting with f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80 not found: ID does not exist" containerID="f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.232468 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80"} err="failed to get container status 
\"f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80\": rpc error: code = NotFound desc = could not find container \"f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80\": container with ID starting with f09848e8d2e55232941474d096d76b4a2e68ae8b419253ea0c8c7a539477fa80 not found: ID does not exist" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.240233 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.252627 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:55:46 crc kubenswrapper[4890]: E0121 15:55:46.253490 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6623bf-ad19-4e29-84aa-d16fc10b29a3" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.253578 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6623bf-ad19-4e29-84aa-d16fc10b29a3" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.253867 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6623bf-ad19-4e29-84aa-d16fc10b29a3" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.254930 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.258211 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.258720 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.259052 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.262391 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.415729 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.416223 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.416392 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxws7\" (UniqueName: \"kubernetes.io/projected/052ad7d6-6d71-4b3b-962a-db635b2df4a3-kube-api-access-zxws7\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc 
kubenswrapper[4890]: I0121 15:55:46.416555 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.416727 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.517800 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.518156 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.518292 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxws7\" (UniqueName: \"kubernetes.io/projected/052ad7d6-6d71-4b3b-962a-db635b2df4a3-kube-api-access-zxws7\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 
15:55:46.518453 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.518616 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.523017 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.523323 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.523783 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.524629 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.537762 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxws7\" (UniqueName: \"kubernetes.io/projected/052ad7d6-6d71-4b3b-962a-db635b2df4a3-kube-api-access-zxws7\") pod \"nova-cell1-novncproxy-0\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:46 crc kubenswrapper[4890]: I0121 15:55:46.577946 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:47 crc kubenswrapper[4890]: I0121 15:55:47.016877 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:55:47 crc kubenswrapper[4890]: W0121 15:55:47.032016 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod052ad7d6_6d71_4b3b_962a_db635b2df4a3.slice/crio-7cfa8797413a609429b2f87219d319ac9d7bd8490afb2f5ac097d5fd89588e3b WatchSource:0}: Error finding container 7cfa8797413a609429b2f87219d319ac9d7bd8490afb2f5ac097d5fd89588e3b: Status 404 returned error can't find the container with id 7cfa8797413a609429b2f87219d319ac9d7bd8490afb2f5ac097d5fd89588e3b Jan 21 15:55:47 crc kubenswrapper[4890]: I0121 15:55:47.167450 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"052ad7d6-6d71-4b3b-962a-db635b2df4a3","Type":"ContainerStarted","Data":"7cfa8797413a609429b2f87219d319ac9d7bd8490afb2f5ac097d5fd89588e3b"} Jan 21 15:55:47 crc kubenswrapper[4890]: I0121 15:55:47.928483 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6623bf-ad19-4e29-84aa-d16fc10b29a3" 
path="/var/lib/kubelet/pods/2a6623bf-ad19-4e29-84aa-d16fc10b29a3/volumes" Jan 21 15:55:48 crc kubenswrapper[4890]: I0121 15:55:48.178106 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"052ad7d6-6d71-4b3b-962a-db635b2df4a3","Type":"ContainerStarted","Data":"03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874"} Jan 21 15:55:48 crc kubenswrapper[4890]: I0121 15:55:48.216514 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.216489367 podStartE2EDuration="2.216489367s" podCreationTimestamp="2026-01-21 15:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:48.194801187 +0000 UTC m=+1430.556243596" watchObservedRunningTime="2026-01-21 15:55:48.216489367 +0000 UTC m=+1430.577931796" Jan 21 15:55:48 crc kubenswrapper[4890]: I0121 15:55:48.332598 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:55:48 crc kubenswrapper[4890]: I0121 15:55:48.334519 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:55:48 crc kubenswrapper[4890]: I0121 15:55:48.335024 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:55:48 crc kubenswrapper[4890]: I0121 15:55:48.339327 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:55:48 crc kubenswrapper[4890]: I0121 15:55:48.762663 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:55:48 crc kubenswrapper[4890]: 
I0121 15:55:48.762785 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.189185 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.194592 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.359420 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-xdkgv"] Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.363476 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.378418 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-xdkgv"] Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.490208 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-config\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.490683 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 
crc kubenswrapper[4890]: I0121 15:55:49.490773 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-svc\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.490808 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx84x\" (UniqueName: \"kubernetes.io/projected/212a7372-7b31-40f6-bef8-fc76925be961-kube-api-access-bx84x\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.490834 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.490883 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.592017 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-config\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" 
Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.592089 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.592166 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-svc\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.592196 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx84x\" (UniqueName: \"kubernetes.io/projected/212a7372-7b31-40f6-bef8-fc76925be961-kube-api-access-bx84x\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.592214 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.592249 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: 
I0121 15:55:49.593315 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.593383 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.593574 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-svc\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.594107 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-config\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.595022 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.619269 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bx84x\" (UniqueName: \"kubernetes.io/projected/212a7372-7b31-40f6-bef8-fc76925be961-kube-api-access-bx84x\") pod \"dnsmasq-dns-5ddd577785-xdkgv\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:49 crc kubenswrapper[4890]: I0121 15:55:49.691756 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:50 crc kubenswrapper[4890]: I0121 15:55:50.305153 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-xdkgv"] Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.209840 4890 generic.go:334] "Generic (PLEG): container finished" podID="212a7372-7b31-40f6-bef8-fc76925be961" containerID="3914c5aeda2235f7697d3a660d19be4755cff8e1c1da33467a876d8003763f55" exitCode=0 Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.209912 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" event={"ID":"212a7372-7b31-40f6-bef8-fc76925be961","Type":"ContainerDied","Data":"3914c5aeda2235f7697d3a660d19be4755cff8e1c1da33467a876d8003763f55"} Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.210544 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" event={"ID":"212a7372-7b31-40f6-bef8-fc76925be961","Type":"ContainerStarted","Data":"02c0884537c3d2ac2df7cdf7b76d1c17bbd819da04a8cdb0f5e6ecb766b6b950"} Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.579059 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.707837 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.708514 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="sg-core" containerID="cri-o://2a4704d16b27057f62640f24c0c0b5778da76fc95c4ecb1d8e23ecfa5d3918c7" gracePeriod=30 Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.708566 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="proxy-httpd" containerID="cri-o://10e08bcfd126bcd3031c721466c71477df23426076cdd2aea2f592db18b977e8" gracePeriod=30 Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.708566 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="ceilometer-notification-agent" containerID="cri-o://8c341d956ca3c98d75872c09cde9665b204e3d3c9ce374cd92a63f06aac5368e" gracePeriod=30 Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.708469 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="ceilometer-central-agent" containerID="cri-o://961404c8b0869ce32ea7cde0a5a32b1aae6755fd6ed5890bc47463b9dfb14eb3" gracePeriod=30 Jan 21 15:55:51 crc kubenswrapper[4890]: I0121 15:55:51.717015 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.195:3000/\": read tcp 10.217.0.2:60592->10.217.0.195:3000: read: connection reset by peer" Jan 21 15:55:52 crc kubenswrapper[4890]: I0121 15:55:52.222939 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" event={"ID":"212a7372-7b31-40f6-bef8-fc76925be961","Type":"ContainerStarted","Data":"e83ef076a5c80f27ea8f77e9616e9b721e5e3861579511656b623a4c8b0a184d"} Jan 21 15:55:52 crc kubenswrapper[4890]: I0121 15:55:52.223424 4890 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:55:52 crc kubenswrapper[4890]: I0121 15:55:52.228914 4890 generic.go:334] "Generic (PLEG): container finished" podID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerID="10e08bcfd126bcd3031c721466c71477df23426076cdd2aea2f592db18b977e8" exitCode=0 Jan 21 15:55:52 crc kubenswrapper[4890]: I0121 15:55:52.229155 4890 generic.go:334] "Generic (PLEG): container finished" podID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerID="2a4704d16b27057f62640f24c0c0b5778da76fc95c4ecb1d8e23ecfa5d3918c7" exitCode=2 Jan 21 15:55:52 crc kubenswrapper[4890]: I0121 15:55:52.228998 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerDied","Data":"10e08bcfd126bcd3031c721466c71477df23426076cdd2aea2f592db18b977e8"} Jan 21 15:55:52 crc kubenswrapper[4890]: I0121 15:55:52.229550 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerDied","Data":"2a4704d16b27057f62640f24c0c0b5778da76fc95c4ecb1d8e23ecfa5d3918c7"} Jan 21 15:55:52 crc kubenswrapper[4890]: I0121 15:55:52.247569 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" podStartSLOduration=3.247548446 podStartE2EDuration="3.247548446s" podCreationTimestamp="2026-01-21 15:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:52.242597483 +0000 UTC m=+1434.604039892" watchObservedRunningTime="2026-01-21 15:55:52.247548446 +0000 UTC m=+1434.608990855" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.107532 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.107995 4890 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-log" containerID="cri-o://2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055" gracePeriod=30 Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.108158 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-api" containerID="cri-o://e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538" gracePeriod=30 Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.241760 4890 generic.go:334] "Generic (PLEG): container finished" podID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerID="8c341d956ca3c98d75872c09cde9665b204e3d3c9ce374cd92a63f06aac5368e" exitCode=0 Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.241796 4890 generic.go:334] "Generic (PLEG): container finished" podID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerID="961404c8b0869ce32ea7cde0a5a32b1aae6755fd6ed5890bc47463b9dfb14eb3" exitCode=0 Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.241821 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerDied","Data":"8c341d956ca3c98d75872c09cde9665b204e3d3c9ce374cd92a63f06aac5368e"} Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.241868 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerDied","Data":"961404c8b0869ce32ea7cde0a5a32b1aae6755fd6ed5890bc47463b9dfb14eb3"} Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.243933 4890 generic.go:334] "Generic (PLEG): container finished" podID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerID="2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055" exitCode=143 Jan 21 15:55:53 crc 
kubenswrapper[4890]: I0121 15:55:53.244076 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"551a96b4-d28a-4452-85f2-fa7bbeac25a0","Type":"ContainerDied","Data":"2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055"} Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.680434 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.699081 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-ceilometer-tls-certs\") pod \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.699206 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-log-httpd\") pod \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.699280 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-config-data\") pod \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.699332 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-combined-ca-bundle\") pod \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.699513 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-scripts\") pod \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.699701 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-sg-core-conf-yaml\") pod \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.699760 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-run-httpd\") pod \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.699871 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgpsh\" (UniqueName: \"kubernetes.io/projected/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-kube-api-access-bgpsh\") pod \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\" (UID: \"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b\") " Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.700194 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" (UID: "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.700508 4890 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.702534 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" (UID: "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.714907 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-kube-api-access-bgpsh" (OuterVolumeSpecName: "kube-api-access-bgpsh") pod "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" (UID: "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b"). InnerVolumeSpecName "kube-api-access-bgpsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.733754 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-scripts" (OuterVolumeSpecName: "scripts") pod "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" (UID: "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.762221 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" (UID: "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.784452 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" (UID: "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.802014 4890 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.802040 4890 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.802053 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgpsh\" (UniqueName: \"kubernetes.io/projected/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-kube-api-access-bgpsh\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.802062 4890 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.802070 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.807802 4890 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" (UID: "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.843907 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-config-data" (OuterVolumeSpecName: "config-data") pod "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" (UID: "2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.904050 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:53 crc kubenswrapper[4890]: I0121 15:55:53.904084 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.254133 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b","Type":"ContainerDied","Data":"8bc991cdd44ec808e10219c44c1be41b472a91901bcc9a2ff79269abca36dae4"} Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.254186 4890 scope.go:117] "RemoveContainer" containerID="10e08bcfd126bcd3031c721466c71477df23426076cdd2aea2f592db18b977e8" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.254191 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.305331 4890 scope.go:117] "RemoveContainer" containerID="2a4704d16b27057f62640f24c0c0b5778da76fc95c4ecb1d8e23ecfa5d3918c7" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.328365 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.335906 4890 scope.go:117] "RemoveContainer" containerID="8c341d956ca3c98d75872c09cde9665b204e3d3c9ce374cd92a63f06aac5368e" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.373000 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.386610 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:54 crc kubenswrapper[4890]: E0121 15:55:54.387550 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="sg-core" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.387573 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="sg-core" Jan 21 15:55:54 crc kubenswrapper[4890]: E0121 15:55:54.387584 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="ceilometer-central-agent" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.387591 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="ceilometer-central-agent" Jan 21 15:55:54 crc kubenswrapper[4890]: E0121 15:55:54.387623 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="ceilometer-notification-agent" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.387629 4890 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="ceilometer-notification-agent" Jan 21 15:55:54 crc kubenswrapper[4890]: E0121 15:55:54.387728 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="proxy-httpd" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.387752 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="proxy-httpd" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.387758 4890 scope.go:117] "RemoveContainer" containerID="961404c8b0869ce32ea7cde0a5a32b1aae6755fd6ed5890bc47463b9dfb14eb3" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.388570 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="ceilometer-notification-agent" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.388604 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="ceilometer-central-agent" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.388620 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="sg-core" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.388642 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" containerName="proxy-httpd" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.391754 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.393702 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.393989 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.395817 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.414104 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-log-httpd\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.414157 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-scripts\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.414240 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.414317 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp7n9\" (UniqueName: \"kubernetes.io/projected/d1913844-e8f8-4e8a-83d0-bb63ca53185e-kube-api-access-gp7n9\") pod \"ceilometer-0\" (UID: 
\"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.414396 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-run-httpd\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.414428 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.414450 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-config-data\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.414471 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.423296 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.515707 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-config-data\") pod \"ceilometer-0\" (UID: 
\"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.516072 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.516112 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-log-httpd\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.516169 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-scripts\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.516256 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.516414 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp7n9\" (UniqueName: \"kubernetes.io/projected/d1913844-e8f8-4e8a-83d0-bb63ca53185e-kube-api-access-gp7n9\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.516484 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-run-httpd\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.516547 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.518155 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-log-httpd\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.518392 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-run-httpd\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.521312 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-config-data\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.522814 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 
15:55:54.523181 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.533510 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.534174 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-scripts\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.536879 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp7n9\" (UniqueName: \"kubernetes.io/projected/d1913844-e8f8-4e8a-83d0-bb63ca53185e-kube-api-access-gp7n9\") pod \"ceilometer-0\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") " pod="openstack/ceilometer-0" Jan 21 15:55:54 crc kubenswrapper[4890]: I0121 15:55:54.714728 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:55:55 crc kubenswrapper[4890]: W0121 15:55:55.134116 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1913844_e8f8_4e8a_83d0_bb63ca53185e.slice/crio-3c13fbbd452f17c27fb22e47e79a4b7e1e36612310b7a634b4e0e08af5b9b72b WatchSource:0}: Error finding container 3c13fbbd452f17c27fb22e47e79a4b7e1e36612310b7a634b4e0e08af5b9b72b: Status 404 returned error can't find the container with id 3c13fbbd452f17c27fb22e47e79a4b7e1e36612310b7a634b4e0e08af5b9b72b Jan 21 15:55:55 crc kubenswrapper[4890]: I0121 15:55:55.139044 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:55 crc kubenswrapper[4890]: I0121 15:55:55.265711 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerStarted","Data":"3c13fbbd452f17c27fb22e47e79a4b7e1e36612310b7a634b4e0e08af5b9b72b"} Jan 21 15:55:55 crc kubenswrapper[4890]: I0121 15:55:55.305944 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:55:55 crc kubenswrapper[4890]: I0121 15:55:55.937605 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b" path="/var/lib/kubelet/pods/2cdcd258-ceb7-47d2-8ab3-bb6b53fd806b/volumes" Jan 21 15:55:56 crc kubenswrapper[4890]: I0121 15:55:56.578069 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:56 crc kubenswrapper[4890]: I0121 15:55:56.596238 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.307531 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:55:57 crc 
kubenswrapper[4890]: I0121 15:55:57.464225 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hzndt"] Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.465386 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.467597 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.473232 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.476983 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hzndt"] Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.568068 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.568345 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-scripts\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.568403 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-config-data\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " 
pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.568510 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ml6\" (UniqueName: \"kubernetes.io/projected/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-kube-api-access-94ml6\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.670072 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94ml6\" (UniqueName: \"kubernetes.io/projected/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-kube-api-access-94ml6\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.670226 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.670260 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-scripts\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.670285 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-config-data\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " 
pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.675994 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.676198 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-scripts\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.676489 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-config-data\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.692162 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94ml6\" (UniqueName: \"kubernetes.io/projected/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-kube-api-access-94ml6\") pod \"nova-cell1-cell-mapping-hzndt\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:57 crc kubenswrapper[4890]: I0121 15:55:57.787834 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.065602 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.204281 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-config-data\") pod \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.204393 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-combined-ca-bundle\") pod \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.204473 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp6p5\" (UniqueName: \"kubernetes.io/projected/551a96b4-d28a-4452-85f2-fa7bbeac25a0-kube-api-access-hp6p5\") pod \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.204601 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551a96b4-d28a-4452-85f2-fa7bbeac25a0-logs\") pod \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\" (UID: \"551a96b4-d28a-4452-85f2-fa7bbeac25a0\") " Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.205966 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551a96b4-d28a-4452-85f2-fa7bbeac25a0-logs" (OuterVolumeSpecName: "logs") pod "551a96b4-d28a-4452-85f2-fa7bbeac25a0" (UID: "551a96b4-d28a-4452-85f2-fa7bbeac25a0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.213941 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551a96b4-d28a-4452-85f2-fa7bbeac25a0-kube-api-access-hp6p5" (OuterVolumeSpecName: "kube-api-access-hp6p5") pod "551a96b4-d28a-4452-85f2-fa7bbeac25a0" (UID: "551a96b4-d28a-4452-85f2-fa7bbeac25a0"). InnerVolumeSpecName "kube-api-access-hp6p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.231195 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-config-data" (OuterVolumeSpecName: "config-data") pod "551a96b4-d28a-4452-85f2-fa7bbeac25a0" (UID: "551a96b4-d28a-4452-85f2-fa7bbeac25a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.232263 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "551a96b4-d28a-4452-85f2-fa7bbeac25a0" (UID: "551a96b4-d28a-4452-85f2-fa7bbeac25a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.304333 4890 generic.go:334] "Generic (PLEG): container finished" podID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerID="e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538" exitCode=0 Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.305402 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.305562 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"551a96b4-d28a-4452-85f2-fa7bbeac25a0","Type":"ContainerDied","Data":"e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538"} Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.305638 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"551a96b4-d28a-4452-85f2-fa7bbeac25a0","Type":"ContainerDied","Data":"240860bb9b1299a7dae8aa4d9d05c870241e82727fb4d04a8d828591960ec888"} Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.305660 4890 scope.go:117] "RemoveContainer" containerID="e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.307006 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.307033 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551a96b4-d28a-4452-85f2-fa7bbeac25a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.307047 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp6p5\" (UniqueName: \"kubernetes.io/projected/551a96b4-d28a-4452-85f2-fa7bbeac25a0-kube-api-access-hp6p5\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.307058 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551a96b4-d28a-4452-85f2-fa7bbeac25a0-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.308727 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-hzndt"] Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.342147 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.350050 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.368858 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:58 crc kubenswrapper[4890]: E0121 15:55:58.369234 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-api" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.369247 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-api" Jan 21 15:55:58 crc kubenswrapper[4890]: E0121 15:55:58.369260 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-log" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.369265 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-log" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.369457 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-api" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.369470 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" containerName="nova-api-log" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.370328 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.372951 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.373458 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.373674 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.381616 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.408730 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qcrn\" (UniqueName: \"kubernetes.io/projected/fad5ed07-a426-48f5-ad25-05f3a38181d9-kube-api-access-5qcrn\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.409085 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.409124 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.409155 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad5ed07-a426-48f5-ad25-05f3a38181d9-logs\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.409241 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.409412 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-config-data\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.426582 4890 scope.go:117] "RemoveContainer" containerID="2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.444372 4890 scope.go:117] "RemoveContainer" containerID="e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538" Jan 21 15:55:58 crc kubenswrapper[4890]: E0121 15:55:58.444726 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538\": container with ID starting with e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538 not found: ID does not exist" containerID="e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.444777 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538"} err="failed to get 
container status \"e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538\": rpc error: code = NotFound desc = could not find container \"e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538\": container with ID starting with e00716a06a2e317e381f0543aca78eba1061d7ccbbe649966e15f620bf590538 not found: ID does not exist" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.444818 4890 scope.go:117] "RemoveContainer" containerID="2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055" Jan 21 15:55:58 crc kubenswrapper[4890]: E0121 15:55:58.445109 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055\": container with ID starting with 2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055 not found: ID does not exist" containerID="2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.445128 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055"} err="failed to get container status \"2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055\": rpc error: code = NotFound desc = could not find container \"2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055\": container with ID starting with 2a99d201f82e35426d43911db9282074389183e0b9e67b30df8ec1b9a91f8055 not found: ID does not exist" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.510004 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-config-data\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.510094 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qcrn\" (UniqueName: \"kubernetes.io/projected/fad5ed07-a426-48f5-ad25-05f3a38181d9-kube-api-access-5qcrn\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.510153 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.510185 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.510212 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad5ed07-a426-48f5-ad25-05f3a38181d9-logs\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.510248 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0" Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.511541 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad5ed07-a426-48f5-ad25-05f3a38181d9-logs\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " 
pod="openstack/nova-api-0"
Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.515280 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0"
Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.516198 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-config-data\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0"
Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.521329 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0"
Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.524530 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0"
Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.531167 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qcrn\" (UniqueName: \"kubernetes.io/projected/fad5ed07-a426-48f5-ad25-05f3a38181d9-kube-api-access-5qcrn\") pod \"nova-api-0\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " pod="openstack/nova-api-0"
Jan 21 15:55:58 crc kubenswrapper[4890]: I0121 15:55:58.691151 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.246455 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.329666 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hzndt" event={"ID":"66fbb4dd-7cb6-44ca-890b-3d54e9b73462","Type":"ContainerStarted","Data":"7b18fb00e90fa86f50f2bfba1494e010b64f7782eb0d47b4b7973db36cc104b3"}
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.329718 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hzndt" event={"ID":"66fbb4dd-7cb6-44ca-890b-3d54e9b73462","Type":"ContainerStarted","Data":"2fa584eafcef163b7db62cc75ceca1dfc9bd6a6bb9b74c30613408d41a0c155c"}
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.335075 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad5ed07-a426-48f5-ad25-05f3a38181d9","Type":"ContainerStarted","Data":"1582a8603a5576aaf516b567de4c4c4ebb0cd42e2e2d1351c2d78157ced0c75c"}
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.337331 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerStarted","Data":"5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785"}
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.360559 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hzndt" podStartSLOduration=2.360542267 podStartE2EDuration="2.360542267s" podCreationTimestamp="2026-01-21 15:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:55:59.352843496 +0000 UTC m=+1441.714285905" watchObservedRunningTime="2026-01-21 15:55:59.360542267 +0000 UTC m=+1441.721984676"
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.693297 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv"
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.769610 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-cvh5s"]
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.770116 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s" podUID="55162589-99e0-4b08-931e-79b4cb40b318" containerName="dnsmasq-dns" containerID="cri-o://dae81e651eb0a134bbb1825593c87bc347567b78033ee2ad43a2b417009a5490" gracePeriod=10
Jan 21 15:55:59 crc kubenswrapper[4890]: I0121 15:55:59.940927 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551a96b4-d28a-4452-85f2-fa7bbeac25a0" path="/var/lib/kubelet/pods/551a96b4-d28a-4452-85f2-fa7bbeac25a0/volumes"
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.357735 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad5ed07-a426-48f5-ad25-05f3a38181d9","Type":"ContainerStarted","Data":"d5596adc27d84979eda5a1e96750f82f6ec1860b372f9b7cfa3c3b4bfd6fdd9a"}
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.361857 4890 generic.go:334] "Generic (PLEG): container finished" podID="55162589-99e0-4b08-931e-79b4cb40b318" containerID="dae81e651eb0a134bbb1825593c87bc347567b78033ee2ad43a2b417009a5490" exitCode=0
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.361905 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s" event={"ID":"55162589-99e0-4b08-931e-79b4cb40b318","Type":"ContainerDied","Data":"dae81e651eb0a134bbb1825593c87bc347567b78033ee2ad43a2b417009a5490"}
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.367897 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerStarted","Data":"cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4"}
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.829842 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.881131 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-config\") pod \"55162589-99e0-4b08-931e-79b4cb40b318\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") "
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.881271 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbjbr\" (UniqueName: \"kubernetes.io/projected/55162589-99e0-4b08-931e-79b4cb40b318-kube-api-access-dbjbr\") pod \"55162589-99e0-4b08-931e-79b4cb40b318\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") "
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.881331 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-svc\") pod \"55162589-99e0-4b08-931e-79b4cb40b318\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") "
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.881452 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-nb\") pod \"55162589-99e0-4b08-931e-79b4cb40b318\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") "
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.881480 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-swift-storage-0\") pod \"55162589-99e0-4b08-931e-79b4cb40b318\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") "
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.881569 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-sb\") pod \"55162589-99e0-4b08-931e-79b4cb40b318\" (UID: \"55162589-99e0-4b08-931e-79b4cb40b318\") "
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.914647 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55162589-99e0-4b08-931e-79b4cb40b318-kube-api-access-dbjbr" (OuterVolumeSpecName: "kube-api-access-dbjbr") pod "55162589-99e0-4b08-931e-79b4cb40b318" (UID: "55162589-99e0-4b08-931e-79b4cb40b318"). InnerVolumeSpecName "kube-api-access-dbjbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.984967 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbjbr\" (UniqueName: \"kubernetes.io/projected/55162589-99e0-4b08-931e-79b4cb40b318-kube-api-access-dbjbr\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:00 crc kubenswrapper[4890]: I0121 15:56:00.988853 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55162589-99e0-4b08-931e-79b4cb40b318" (UID: "55162589-99e0-4b08-931e-79b4cb40b318"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:00.991141 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "55162589-99e0-4b08-931e-79b4cb40b318" (UID: "55162589-99e0-4b08-931e-79b4cb40b318"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.006800 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-config" (OuterVolumeSpecName: "config") pod "55162589-99e0-4b08-931e-79b4cb40b318" (UID: "55162589-99e0-4b08-931e-79b4cb40b318"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.017661 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55162589-99e0-4b08-931e-79b4cb40b318" (UID: "55162589-99e0-4b08-931e-79b4cb40b318"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.018794 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55162589-99e0-4b08-931e-79b4cb40b318" (UID: "55162589-99e0-4b08-931e-79b4cb40b318"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.087058 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.087105 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.087116 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.087129 4890 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.087140 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55162589-99e0-4b08-931e-79b4cb40b318-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.382826 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s" event={"ID":"55162589-99e0-4b08-931e-79b4cb40b318","Type":"ContainerDied","Data":"3b3436a483ede244614b434b7e84ded2ba893fe7213c7c52f09d82ca5f621672"}
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.382881 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-cvh5s"
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.382890 4890 scope.go:117] "RemoveContainer" containerID="dae81e651eb0a134bbb1825593c87bc347567b78033ee2ad43a2b417009a5490"
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.385386 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerStarted","Data":"2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0"}
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.389660 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad5ed07-a426-48f5-ad25-05f3a38181d9","Type":"ContainerStarted","Data":"2c20e3cecb87a1adc8630eddeb9b39d0ca53251d753f97e932856c2ea6e1493a"}
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.404541 4890 scope.go:117] "RemoveContainer" containerID="720dbad3a9e03a9a90a2cf1f8c0cb55417dccb2355a038f87ec98af0aa642518"
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.422975 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.4229531890000002 podStartE2EDuration="3.422953189s" podCreationTimestamp="2026-01-21 15:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:01.414309434 +0000 UTC m=+1443.775751843" watchObservedRunningTime="2026-01-21 15:56:01.422953189 +0000 UTC m=+1443.784395598"
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.442025 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-cvh5s"]
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.450966 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-cvh5s"]
Jan 21 15:56:01 crc kubenswrapper[4890]: I0121 15:56:01.928259 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55162589-99e0-4b08-931e-79b4cb40b318" path="/var/lib/kubelet/pods/55162589-99e0-4b08-931e-79b4cb40b318/volumes"
Jan 21 15:56:03 crc kubenswrapper[4890]: I0121 15:56:03.411855 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerStarted","Data":"dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2"}
Jan 21 15:56:03 crc kubenswrapper[4890]: I0121 15:56:03.412524 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="ceilometer-central-agent" containerID="cri-o://5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785" gracePeriod=30
Jan 21 15:56:03 crc kubenswrapper[4890]: I0121 15:56:03.412786 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 15:56:03 crc kubenswrapper[4890]: I0121 15:56:03.413179 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="proxy-httpd" containerID="cri-o://dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2" gracePeriod=30
Jan 21 15:56:03 crc kubenswrapper[4890]: I0121 15:56:03.413325 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="sg-core" containerID="cri-o://2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0" gracePeriod=30
Jan 21 15:56:03 crc kubenswrapper[4890]: I0121 15:56:03.413411 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="ceilometer-notification-agent" containerID="cri-o://cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4" gracePeriod=30
Jan 21 15:56:03 crc kubenswrapper[4890]: I0121 15:56:03.439443 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.381194958 podStartE2EDuration="9.439421466s" podCreationTimestamp="2026-01-21 15:55:54 +0000 UTC" firstStartedPulling="2026-01-21 15:55:55.136579847 +0000 UTC m=+1437.498022256" lastFinishedPulling="2026-01-21 15:56:02.194806335 +0000 UTC m=+1444.556248764" observedRunningTime="2026-01-21 15:56:03.437909799 +0000 UTC m=+1445.799352208" watchObservedRunningTime="2026-01-21 15:56:03.439421466 +0000 UTC m=+1445.800863885"
Jan 21 15:56:04 crc kubenswrapper[4890]: I0121 15:56:04.424346 4890 generic.go:334] "Generic (PLEG): container finished" podID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerID="dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2" exitCode=0
Jan 21 15:56:04 crc kubenswrapper[4890]: I0121 15:56:04.424687 4890 generic.go:334] "Generic (PLEG): container finished" podID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerID="2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0" exitCode=2
Jan 21 15:56:04 crc kubenswrapper[4890]: I0121 15:56:04.424700 4890 generic.go:334] "Generic (PLEG): container finished" podID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerID="cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4" exitCode=0
Jan 21 15:56:04 crc kubenswrapper[4890]: I0121 15:56:04.424407 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerDied","Data":"dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2"}
Jan 21 15:56:04 crc kubenswrapper[4890]: I0121 15:56:04.424775 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerDied","Data":"2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0"}
Jan 21 15:56:04 crc kubenswrapper[4890]: I0121 15:56:04.424786 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerDied","Data":"cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4"}
Jan 21 15:56:04 crc kubenswrapper[4890]: I0121 15:56:04.426747 4890 generic.go:334] "Generic (PLEG): container finished" podID="66fbb4dd-7cb6-44ca-890b-3d54e9b73462" containerID="7b18fb00e90fa86f50f2bfba1494e010b64f7782eb0d47b4b7973db36cc104b3" exitCode=0
Jan 21 15:56:04 crc kubenswrapper[4890]: I0121 15:56:04.426790 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hzndt" event={"ID":"66fbb4dd-7cb6-44ca-890b-3d54e9b73462","Type":"ContainerDied","Data":"7b18fb00e90fa86f50f2bfba1494e010b64f7782eb0d47b4b7973db36cc104b3"}
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.070655 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.166518 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-combined-ca-bundle\") pod \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") "
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.166583 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp7n9\" (UniqueName: \"kubernetes.io/projected/d1913844-e8f8-4e8a-83d0-bb63ca53185e-kube-api-access-gp7n9\") pod \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") "
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.166619 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-log-httpd\") pod \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") "
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.166663 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-sg-core-conf-yaml\") pod \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") "
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.166738 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-ceilometer-tls-certs\") pod \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") "
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.166837 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-scripts\") pod \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") "
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.166896 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-config-data\") pod \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") "
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.166940 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-run-httpd\") pod \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\" (UID: \"d1913844-e8f8-4e8a-83d0-bb63ca53185e\") "
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.167411 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d1913844-e8f8-4e8a-83d0-bb63ca53185e" (UID: "d1913844-e8f8-4e8a-83d0-bb63ca53185e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.167561 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d1913844-e8f8-4e8a-83d0-bb63ca53185e" (UID: "d1913844-e8f8-4e8a-83d0-bb63ca53185e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.167954 4890 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.167982 4890 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1913844-e8f8-4e8a-83d0-bb63ca53185e-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.172430 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-scripts" (OuterVolumeSpecName: "scripts") pod "d1913844-e8f8-4e8a-83d0-bb63ca53185e" (UID: "d1913844-e8f8-4e8a-83d0-bb63ca53185e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.186286 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1913844-e8f8-4e8a-83d0-bb63ca53185e-kube-api-access-gp7n9" (OuterVolumeSpecName: "kube-api-access-gp7n9") pod "d1913844-e8f8-4e8a-83d0-bb63ca53185e" (UID: "d1913844-e8f8-4e8a-83d0-bb63ca53185e"). InnerVolumeSpecName "kube-api-access-gp7n9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.206311 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d1913844-e8f8-4e8a-83d0-bb63ca53185e" (UID: "d1913844-e8f8-4e8a-83d0-bb63ca53185e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.242130 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d1913844-e8f8-4e8a-83d0-bb63ca53185e" (UID: "d1913844-e8f8-4e8a-83d0-bb63ca53185e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.250640 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1913844-e8f8-4e8a-83d0-bb63ca53185e" (UID: "d1913844-e8f8-4e8a-83d0-bb63ca53185e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.270306 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.270344 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp7n9\" (UniqueName: \"kubernetes.io/projected/d1913844-e8f8-4e8a-83d0-bb63ca53185e-kube-api-access-gp7n9\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.270378 4890 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.270389 4890 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.270401 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.298463 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-config-data" (OuterVolumeSpecName: "config-data") pod "d1913844-e8f8-4e8a-83d0-bb63ca53185e" (UID: "d1913844-e8f8-4e8a-83d0-bb63ca53185e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.372469 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1913844-e8f8-4e8a-83d0-bb63ca53185e-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.437778 4890 generic.go:334] "Generic (PLEG): container finished" podID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerID="5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785" exitCode=0
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.437898 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.437906 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerDied","Data":"5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785"}
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.437972 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1913844-e8f8-4e8a-83d0-bb63ca53185e","Type":"ContainerDied","Data":"3c13fbbd452f17c27fb22e47e79a4b7e1e36612310b7a634b4e0e08af5b9b72b"}
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.437992 4890 scope.go:117] "RemoveContainer" containerID="dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.472382 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.482999 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.484635 4890 scope.go:117] "RemoveContainer" containerID="2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.501792 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.502243 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55162589-99e0-4b08-931e-79b4cb40b318" containerName="dnsmasq-dns"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.502262 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="55162589-99e0-4b08-931e-79b4cb40b318" containerName="dnsmasq-dns"
Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.502275 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="ceilometer-notification-agent"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.502311 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="ceilometer-notification-agent"
Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.502327 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="proxy-httpd"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.502337 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="proxy-httpd"
Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.513269 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55162589-99e0-4b08-931e-79b4cb40b318" containerName="init"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.513323 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="55162589-99e0-4b08-931e-79b4cb40b318" containerName="init"
Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.513366 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="sg-core"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.513376 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="sg-core"
Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.513393 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="ceilometer-central-agent"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.513399 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="ceilometer-central-agent"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.513703 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="ceilometer-notification-agent"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.513722 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="sg-core"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.513731 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="55162589-99e0-4b08-931e-79b4cb40b318" containerName="dnsmasq-dns"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.513749 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="ceilometer-central-agent"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.513760 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" containerName="proxy-httpd"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.515322 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.515432 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.518088 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.518745 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.518956 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.523143 4890 scope.go:117] "RemoveContainer" containerID="cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.552242 4890 scope.go:117] "RemoveContainer" containerID="5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.575637 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.575730 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-scripts\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.575758 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-run-httpd\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.575792 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.575810 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-config-data\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.575823 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.575856 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-log-httpd\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.575885 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4lw\" (UniqueName: \"kubernetes.io/projected/ff7b18fd-53f0-48dc-84ae-d706234668f7-kube-api-access-wn4lw\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.581791 4890 scope.go:117] "RemoveContainer" containerID="dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2"
Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.582423 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2\": container with ID starting with dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2 not found: ID does not exist" containerID="dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.582456 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2"} err="failed to get container status \"dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2\": rpc error: code = NotFound desc = could not find container \"dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2\": container with ID starting with dec99dc42b7fb40c27da55156f71c46c26f3e3e11df526eab086bf6900eaa3e2 not found: ID does not exist"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.582485 4890 scope.go:117] "RemoveContainer" containerID="2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0"
Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.582756 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0\": container with ID starting with 2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0 not found: ID does not exist" containerID="2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0"
Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.582785 4890 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0"} err="failed to get container status \"2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0\": rpc error: code = NotFound desc = could not find container \"2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0\": container with ID starting with 2acb281def23972e7dffd9fdebc12210e33f336702b257ddfb32e144cc3f4fb0 not found: ID does not exist" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.582802 4890 scope.go:117] "RemoveContainer" containerID="cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4" Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.583061 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4\": container with ID starting with cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4 not found: ID does not exist" containerID="cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.583086 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4"} err="failed to get container status \"cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4\": rpc error: code = NotFound desc = could not find container \"cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4\": container with ID starting with cad4acae841df94a19f0a07f8dcac8ac21b966c6942bcdfea5b83c73d2db76a4 not found: ID does not exist" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.583107 4890 scope.go:117] "RemoveContainer" containerID="5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785" Jan 21 15:56:05 crc kubenswrapper[4890]: E0121 15:56:05.583349 4890 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785\": container with ID starting with 5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785 not found: ID does not exist" containerID="5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.584175 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785"} err="failed to get container status \"5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785\": rpc error: code = NotFound desc = could not find container \"5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785\": container with ID starting with 5cbcd83d897884daad7685f921eeb011b1c6a674a32d928afdc3a23f0f873785 not found: ID does not exist" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.678053 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-scripts\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.678124 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-run-httpd\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.678182 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 
21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.678215 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.678236 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-config-data\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.678279 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-log-httpd\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.678298 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn4lw\" (UniqueName: \"kubernetes.io/projected/ff7b18fd-53f0-48dc-84ae-d706234668f7-kube-api-access-wn4lw\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.678396 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.679064 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-run-httpd\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.679378 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-log-httpd\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.682755 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.688099 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.688316 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-scripts\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.688625 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-config-data\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.689467 4890 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.695752 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn4lw\" (UniqueName: \"kubernetes.io/projected/ff7b18fd-53f0-48dc-84ae-d706234668f7-kube-api-access-wn4lw\") pod \"ceilometer-0\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.821675 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.840600 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.887672 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94ml6\" (UniqueName: \"kubernetes.io/projected/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-kube-api-access-94ml6\") pod \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.887904 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-scripts\") pod \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.887943 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-config-data\") pod \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\" (UID: 
\"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.888086 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-combined-ca-bundle\") pod \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\" (UID: \"66fbb4dd-7cb6-44ca-890b-3d54e9b73462\") " Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.893191 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-scripts" (OuterVolumeSpecName: "scripts") pod "66fbb4dd-7cb6-44ca-890b-3d54e9b73462" (UID: "66fbb4dd-7cb6-44ca-890b-3d54e9b73462"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.901700 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-kube-api-access-94ml6" (OuterVolumeSpecName: "kube-api-access-94ml6") pod "66fbb4dd-7cb6-44ca-890b-3d54e9b73462" (UID: "66fbb4dd-7cb6-44ca-890b-3d54e9b73462"). InnerVolumeSpecName "kube-api-access-94ml6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.917628 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-config-data" (OuterVolumeSpecName: "config-data") pod "66fbb4dd-7cb6-44ca-890b-3d54e9b73462" (UID: "66fbb4dd-7cb6-44ca-890b-3d54e9b73462"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.920167 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66fbb4dd-7cb6-44ca-890b-3d54e9b73462" (UID: "66fbb4dd-7cb6-44ca-890b-3d54e9b73462"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.927570 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1913844-e8f8-4e8a-83d0-bb63ca53185e" path="/var/lib/kubelet/pods/d1913844-e8f8-4e8a-83d0-bb63ca53185e/volumes" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.990951 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.990980 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.990992 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94ml6\" (UniqueName: \"kubernetes.io/projected/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-kube-api-access-94ml6\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:05 crc kubenswrapper[4890]: I0121 15:56:05.991000 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66fbb4dd-7cb6-44ca-890b-3d54e9b73462-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.330482 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:56:06 crc 
kubenswrapper[4890]: W0121 15:56:06.336348 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff7b18fd_53f0_48dc_84ae_d706234668f7.slice/crio-22dd1fc6533ed401198587d5331c0f4e5f601325cc551eacae08824d4cf245b4 WatchSource:0}: Error finding container 22dd1fc6533ed401198587d5331c0f4e5f601325cc551eacae08824d4cf245b4: Status 404 returned error can't find the container with id 22dd1fc6533ed401198587d5331c0f4e5f601325cc551eacae08824d4cf245b4 Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.448807 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hzndt" event={"ID":"66fbb4dd-7cb6-44ca-890b-3d54e9b73462","Type":"ContainerDied","Data":"2fa584eafcef163b7db62cc75ceca1dfc9bd6a6bb9b74c30613408d41a0c155c"} Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.449061 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa584eafcef163b7db62cc75ceca1dfc9bd6a6bb9b74c30613408d41a0c155c" Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.449024 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hzndt" Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.452130 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerStarted","Data":"22dd1fc6533ed401198587d5331c0f4e5f601325cc551eacae08824d4cf245b4"} Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.529725 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.529989 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerName="nova-api-log" containerID="cri-o://d5596adc27d84979eda5a1e96750f82f6ec1860b372f9b7cfa3c3b4bfd6fdd9a" gracePeriod=30 Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.530231 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerName="nova-api-api" containerID="cri-o://2c20e3cecb87a1adc8630eddeb9b39d0ca53251d753f97e932856c2ea6e1493a" gracePeriod=30 Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.546451 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.546697 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="38032245-27d5-4a93-998b-8fb378d98197" containerName="nova-scheduler-scheduler" containerID="cri-o://433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55" gracePeriod=30 Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.617256 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.617826 4890 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-log" containerID="cri-o://e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd" gracePeriod=30 Jan 21 15:56:06 crc kubenswrapper[4890]: I0121 15:56:06.618682 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-metadata" containerID="cri-o://153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5" gracePeriod=30 Jan 21 15:56:07 crc kubenswrapper[4890]: I0121 15:56:07.462730 4890 generic.go:334] "Generic (PLEG): container finished" podID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerID="2c20e3cecb87a1adc8630eddeb9b39d0ca53251d753f97e932856c2ea6e1493a" exitCode=0 Jan 21 15:56:07 crc kubenswrapper[4890]: I0121 15:56:07.463042 4890 generic.go:334] "Generic (PLEG): container finished" podID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerID="d5596adc27d84979eda5a1e96750f82f6ec1860b372f9b7cfa3c3b4bfd6fdd9a" exitCode=143 Jan 21 15:56:07 crc kubenswrapper[4890]: I0121 15:56:07.462816 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad5ed07-a426-48f5-ad25-05f3a38181d9","Type":"ContainerDied","Data":"2c20e3cecb87a1adc8630eddeb9b39d0ca53251d753f97e932856c2ea6e1493a"} Jan 21 15:56:07 crc kubenswrapper[4890]: I0121 15:56:07.463110 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad5ed07-a426-48f5-ad25-05f3a38181d9","Type":"ContainerDied","Data":"d5596adc27d84979eda5a1e96750f82f6ec1860b372f9b7cfa3c3b4bfd6fdd9a"} Jan 21 15:56:07 crc kubenswrapper[4890]: I0121 15:56:07.464938 4890 generic.go:334] "Generic (PLEG): container finished" podID="6845ac08-f194-417b-be65-16fa5d4fac41" containerID="e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd" exitCode=143 Jan 21 15:56:07 crc kubenswrapper[4890]: I0121 
15:56:07.464986 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6845ac08-f194-417b-be65-16fa5d4fac41","Type":"ContainerDied","Data":"e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd"} Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.636090 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.750298 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-combined-ca-bundle\") pod \"fad5ed07-a426-48f5-ad25-05f3a38181d9\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.750533 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-config-data\") pod \"fad5ed07-a426-48f5-ad25-05f3a38181d9\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.750571 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad5ed07-a426-48f5-ad25-05f3a38181d9-logs\") pod \"fad5ed07-a426-48f5-ad25-05f3a38181d9\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.750618 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qcrn\" (UniqueName: \"kubernetes.io/projected/fad5ed07-a426-48f5-ad25-05f3a38181d9-kube-api-access-5qcrn\") pod \"fad5ed07-a426-48f5-ad25-05f3a38181d9\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.750702 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-internal-tls-certs\") pod \"fad5ed07-a426-48f5-ad25-05f3a38181d9\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.750748 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-public-tls-certs\") pod \"fad5ed07-a426-48f5-ad25-05f3a38181d9\" (UID: \"fad5ed07-a426-48f5-ad25-05f3a38181d9\") " Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.751323 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad5ed07-a426-48f5-ad25-05f3a38181d9-logs" (OuterVolumeSpecName: "logs") pod "fad5ed07-a426-48f5-ad25-05f3a38181d9" (UID: "fad5ed07-a426-48f5-ad25-05f3a38181d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.757662 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad5ed07-a426-48f5-ad25-05f3a38181d9-kube-api-access-5qcrn" (OuterVolumeSpecName: "kube-api-access-5qcrn") pod "fad5ed07-a426-48f5-ad25-05f3a38181d9" (UID: "fad5ed07-a426-48f5-ad25-05f3a38181d9"). InnerVolumeSpecName "kube-api-access-5qcrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.781316 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fad5ed07-a426-48f5-ad25-05f3a38181d9" (UID: "fad5ed07-a426-48f5-ad25-05f3a38181d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.786508 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-config-data" (OuterVolumeSpecName: "config-data") pod "fad5ed07-a426-48f5-ad25-05f3a38181d9" (UID: "fad5ed07-a426-48f5-ad25-05f3a38181d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.809274 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fad5ed07-a426-48f5-ad25-05f3a38181d9" (UID: "fad5ed07-a426-48f5-ad25-05f3a38181d9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.809695 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fad5ed07-a426-48f5-ad25-05f3a38181d9" (UID: "fad5ed07-a426-48f5-ad25-05f3a38181d9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.853387 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.853422 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.853432 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.853443 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad5ed07-a426-48f5-ad25-05f3a38181d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.853454 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad5ed07-a426-48f5-ad25-05f3a38181d9-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:08 crc kubenswrapper[4890]: I0121 15:56:08.853465 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qcrn\" (UniqueName: \"kubernetes.io/projected/fad5ed07-a426-48f5-ad25-05f3a38181d9-kube-api-access-5qcrn\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.489637 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad5ed07-a426-48f5-ad25-05f3a38181d9","Type":"ContainerDied","Data":"1582a8603a5576aaf516b567de4c4c4ebb0cd42e2e2d1351c2d78157ced0c75c"} Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 
15:56:09.489986 4890 scope.go:117] "RemoveContainer" containerID="2c20e3cecb87a1adc8630eddeb9b39d0ca53251d753f97e932856c2ea6e1493a" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.489693 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.492057 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerStarted","Data":"2660277560aad838dbebdfb2cd900cfc69db1d476e814c29ad6367cf3448c4ee"} Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.523844 4890 scope.go:117] "RemoveContainer" containerID="d5596adc27d84979eda5a1e96750f82f6ec1860b372f9b7cfa3c3b4bfd6fdd9a" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.530032 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.551334 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.579184 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:56:09 crc kubenswrapper[4890]: E0121 15:56:09.580049 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66fbb4dd-7cb6-44ca-890b-3d54e9b73462" containerName="nova-manage" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.580074 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="66fbb4dd-7cb6-44ca-890b-3d54e9b73462" containerName="nova-manage" Jan 21 15:56:09 crc kubenswrapper[4890]: E0121 15:56:09.580099 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerName="nova-api-api" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.580109 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerName="nova-api-api" 
Jan 21 15:56:09 crc kubenswrapper[4890]: E0121 15:56:09.580167 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerName="nova-api-log" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.580178 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerName="nova-api-log" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.580665 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerName="nova-api-log" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.580688 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad5ed07-a426-48f5-ad25-05f3a38181d9" containerName="nova-api-api" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.580699 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="66fbb4dd-7cb6-44ca-890b-3d54e9b73462" containerName="nova-manage" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.581941 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.589658 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.589936 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.591189 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.592551 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.673732 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-config-data\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.673800 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-public-tls-certs\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.673964 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bczg\" (UniqueName: \"kubernetes.io/projected/84118502-58f0-48b2-b659-7f748311fa22-kube-api-access-9bczg\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.674051 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.674210 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.674332 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84118502-58f0-48b2-b659-7f748311fa22-logs\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.775904 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-config-data\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.776001 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-public-tls-certs\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.776046 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bczg\" (UniqueName: \"kubernetes.io/projected/84118502-58f0-48b2-b659-7f748311fa22-kube-api-access-9bczg\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " 
pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.776082 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.776161 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.776224 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84118502-58f0-48b2-b659-7f748311fa22-logs\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.778001 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84118502-58f0-48b2-b659-7f748311fa22-logs\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.780831 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:42744->10.217.0.192:8775: read: connection reset by peer" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.781060 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:42736->10.217.0.192:8775: read: connection reset by peer" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.781803 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-config-data\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.782499 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.782820 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.783022 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-public-tls-certs\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.814670 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bczg\" (UniqueName: \"kubernetes.io/projected/84118502-58f0-48b2-b659-7f748311fa22-kube-api-access-9bczg\") pod \"nova-api-0\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " pod="openstack/nova-api-0" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.924873 4890 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad5ed07-a426-48f5-ad25-05f3a38181d9" path="/var/lib/kubelet/pods/fad5ed07-a426-48f5-ad25-05f3a38181d9/volumes" Jan 21 15:56:09 crc kubenswrapper[4890]: I0121 15:56:09.928274 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.223915 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: E0121 15:56:10.279197 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55 is running failed: container process not found" containerID="433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:56:10 crc kubenswrapper[4890]: E0121 15:56:10.279582 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55 is running failed: container process not found" containerID="433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:56:10 crc kubenswrapper[4890]: E0121 15:56:10.279856 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55 is running failed: container process not found" containerID="433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:56:10 crc kubenswrapper[4890]: E0121 15:56:10.279916 4890 prober.go:104] "Probe 
errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="38032245-27d5-4a93-998b-8fb378d98197" containerName="nova-scheduler-scheduler" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.298366 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-config-data\") pod \"6845ac08-f194-417b-be65-16fa5d4fac41\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.298430 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft9fv\" (UniqueName: \"kubernetes.io/projected/6845ac08-f194-417b-be65-16fa5d4fac41-kube-api-access-ft9fv\") pod \"6845ac08-f194-417b-be65-16fa5d4fac41\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.298474 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-combined-ca-bundle\") pod \"6845ac08-f194-417b-be65-16fa5d4fac41\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.298527 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6845ac08-f194-417b-be65-16fa5d4fac41-logs\") pod \"6845ac08-f194-417b-be65-16fa5d4fac41\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.298557 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-nova-metadata-tls-certs\") pod \"6845ac08-f194-417b-be65-16fa5d4fac41\" (UID: \"6845ac08-f194-417b-be65-16fa5d4fac41\") " Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.301219 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6845ac08-f194-417b-be65-16fa5d4fac41-logs" (OuterVolumeSpecName: "logs") pod "6845ac08-f194-417b-be65-16fa5d4fac41" (UID: "6845ac08-f194-417b-be65-16fa5d4fac41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.308732 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6845ac08-f194-417b-be65-16fa5d4fac41-kube-api-access-ft9fv" (OuterVolumeSpecName: "kube-api-access-ft9fv") pod "6845ac08-f194-417b-be65-16fa5d4fac41" (UID: "6845ac08-f194-417b-be65-16fa5d4fac41"). InnerVolumeSpecName "kube-api-access-ft9fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.336494 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-config-data" (OuterVolumeSpecName: "config-data") pod "6845ac08-f194-417b-be65-16fa5d4fac41" (UID: "6845ac08-f194-417b-be65-16fa5d4fac41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.349505 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6845ac08-f194-417b-be65-16fa5d4fac41" (UID: "6845ac08-f194-417b-be65-16fa5d4fac41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.359737 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6845ac08-f194-417b-be65-16fa5d4fac41" (UID: "6845ac08-f194-417b-be65-16fa5d4fac41"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.400769 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.400804 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft9fv\" (UniqueName: \"kubernetes.io/projected/6845ac08-f194-417b-be65-16fa5d4fac41-kube-api-access-ft9fv\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.400823 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.400834 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6845ac08-f194-417b-be65-16fa5d4fac41-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.400842 4890 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6845ac08-f194-417b-be65-16fa5d4fac41-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.402848 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Jan 21 15:56:10 crc kubenswrapper[4890]: W0121 15:56:10.408915 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84118502_58f0_48b2_b659_7f748311fa22.slice/crio-99a71dff937055c422e54e691ad75297d2914d5942943363dc080847f72bdd3b WatchSource:0}: Error finding container 99a71dff937055c422e54e691ad75297d2914d5942943363dc080847f72bdd3b: Status 404 returned error can't find the container with id 99a71dff937055c422e54e691ad75297d2914d5942943363dc080847f72bdd3b Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.506480 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84118502-58f0-48b2-b659-7f748311fa22","Type":"ContainerStarted","Data":"99a71dff937055c422e54e691ad75297d2914d5942943363dc080847f72bdd3b"} Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.509460 4890 generic.go:334] "Generic (PLEG): container finished" podID="6845ac08-f194-417b-be65-16fa5d4fac41" containerID="153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5" exitCode=0 Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.509668 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6845ac08-f194-417b-be65-16fa5d4fac41","Type":"ContainerDied","Data":"153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5"} Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.509700 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6845ac08-f194-417b-be65-16fa5d4fac41","Type":"ContainerDied","Data":"ed89285c51de340c69ea5ef04a8d253a6c4397374d9cbd23852b637ca3972a1c"} Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.509716 4890 scope.go:117] "RemoveContainer" containerID="153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.509916 4890 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.523580 4890 generic.go:334] "Generic (PLEG): container finished" podID="38032245-27d5-4a93-998b-8fb378d98197" containerID="433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55" exitCode=0 Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.523666 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38032245-27d5-4a93-998b-8fb378d98197","Type":"ContainerDied","Data":"433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55"} Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.532636 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.533607 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerStarted","Data":"bdcf29add7cbc483a28d49a26883018699ca78c8f8bcfbac6388fbdd8fd5c94b"} Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.554998 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.560004 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.603335 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-config-data\") pod \"38032245-27d5-4a93-998b-8fb378d98197\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.603578 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqwmr\" (UniqueName: 
\"kubernetes.io/projected/38032245-27d5-4a93-998b-8fb378d98197-kube-api-access-nqwmr\") pod \"38032245-27d5-4a93-998b-8fb378d98197\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.603676 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-combined-ca-bundle\") pod \"38032245-27d5-4a93-998b-8fb378d98197\" (UID: \"38032245-27d5-4a93-998b-8fb378d98197\") " Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.609165 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38032245-27d5-4a93-998b-8fb378d98197-kube-api-access-nqwmr" (OuterVolumeSpecName: "kube-api-access-nqwmr") pod "38032245-27d5-4a93-998b-8fb378d98197" (UID: "38032245-27d5-4a93-998b-8fb378d98197"). InnerVolumeSpecName "kube-api-access-nqwmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.624012 4890 scope.go:117] "RemoveContainer" containerID="e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.628371 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:56:10 crc kubenswrapper[4890]: E0121 15:56:10.628925 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-metadata" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.628944 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-metadata" Jan 21 15:56:10 crc kubenswrapper[4890]: E0121 15:56:10.628960 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38032245-27d5-4a93-998b-8fb378d98197" containerName="nova-scheduler-scheduler" Jan 21 15:56:10 crc kubenswrapper[4890]: 
I0121 15:56:10.628970 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="38032245-27d5-4a93-998b-8fb378d98197" containerName="nova-scheduler-scheduler" Jan 21 15:56:10 crc kubenswrapper[4890]: E0121 15:56:10.628990 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-log" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.629000 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-log" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.629229 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="38032245-27d5-4a93-998b-8fb378d98197" containerName="nova-scheduler-scheduler" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.629250 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-log" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.629265 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" containerName="nova-metadata-metadata" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.630554 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.632503 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.632771 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.653831 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38032245-27d5-4a93-998b-8fb378d98197" (UID: "38032245-27d5-4a93-998b-8fb378d98197"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.653861 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-config-data" (OuterVolumeSpecName: "config-data") pod "38032245-27d5-4a93-998b-8fb378d98197" (UID: "38032245-27d5-4a93-998b-8fb378d98197"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.664420 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.705555 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.705623 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhmtn\" (UniqueName: \"kubernetes.io/projected/4099ef81-b3a1-4e17-af41-48813a488181-kube-api-access-qhmtn\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.705642 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4099ef81-b3a1-4e17-af41-48813a488181-logs\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.705682 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-config-data\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.705703 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.705744 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqwmr\" (UniqueName: \"kubernetes.io/projected/38032245-27d5-4a93-998b-8fb378d98197-kube-api-access-nqwmr\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.705755 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.705788 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38032245-27d5-4a93-998b-8fb378d98197-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.807137 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.807216 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhmtn\" (UniqueName: \"kubernetes.io/projected/4099ef81-b3a1-4e17-af41-48813a488181-kube-api-access-qhmtn\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.807252 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4099ef81-b3a1-4e17-af41-48813a488181-logs\") pod 
\"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.807290 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-config-data\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.807328 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.810150 4890 scope.go:117] "RemoveContainer" containerID="153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5" Jan 21 15:56:10 crc kubenswrapper[4890]: E0121 15:56:10.811219 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5\": container with ID starting with 153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5 not found: ID does not exist" containerID="153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.811254 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5"} err="failed to get container status \"153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5\": rpc error: code = NotFound desc = could not find container \"153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5\": container with ID starting with 
153a2bb60ee267ac5c716174b919ef393065aba055642264e456b479cc2d64b5 not found: ID does not exist" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.811278 4890 scope.go:117] "RemoveContainer" containerID="e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.811858 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4099ef81-b3a1-4e17-af41-48813a488181-logs\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.812055 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: E0121 15:56:10.814617 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd\": container with ID starting with e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd not found: ID does not exist" containerID="e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.814647 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd"} err="failed to get container status \"e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd\": rpc error: code = NotFound desc = could not find container \"e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd\": container with ID starting with e20137ae99f710fd5463abd4adda9d84724381e17c469004fd583e7a4ddf33fd 
not found: ID does not exist" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.814992 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-config-data\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.817825 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:10 crc kubenswrapper[4890]: I0121 15:56:10.832424 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhmtn\" (UniqueName: \"kubernetes.io/projected/4099ef81-b3a1-4e17-af41-48813a488181-kube-api-access-qhmtn\") pod \"nova-metadata-0\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " pod="openstack/nova-metadata-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.122657 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.544785 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38032245-27d5-4a93-998b-8fb378d98197","Type":"ContainerDied","Data":"ef64cfd15cc24f5c01254f144263dd2ac9c6907779d022995f9b6a208060349c"} Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.545287 4890 scope.go:117] "RemoveContainer" containerID="433d9ad40db3694e711754991542d05202908bd562b85a81c4de1ef2ad783d55" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.545551 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.556322 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerStarted","Data":"e6874597d1e13caa14de2a102072cb91ab0359d88ae4e3beb3a5adaa31d395bd"} Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.558470 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84118502-58f0-48b2-b659-7f748311fa22","Type":"ContainerStarted","Data":"f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9"} Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.558499 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84118502-58f0-48b2-b659-7f748311fa22","Type":"ContainerStarted","Data":"60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d"} Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.621063 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.621031989 podStartE2EDuration="2.621031989s" podCreationTimestamp="2026-01-21 15:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:11.595583516 +0000 UTC m=+1453.957025925" watchObservedRunningTime="2026-01-21 15:56:11.621031989 +0000 UTC m=+1453.982474398" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.627328 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:56:11 crc kubenswrapper[4890]: W0121 15:56:11.643319 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4099ef81_b3a1_4e17_af41_48813a488181.slice/crio-13868da3015f9a8897730019f67e00a5c1630ade212710d3d8f41d620b26efa5 WatchSource:0}: Error finding 
container 13868da3015f9a8897730019f67e00a5c1630ade212710d3d8f41d620b26efa5: Status 404 returned error can't find the container with id 13868da3015f9a8897730019f67e00a5c1630ade212710d3d8f41d620b26efa5 Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.665887 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.686431 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.713617 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.721568 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.721683 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.724093 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.826205 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqtn5\" (UniqueName: \"kubernetes.io/projected/2780ff06-b30a-43e8-97d5-b9477d2713d6-kube-api-access-cqtn5\") pod \"nova-scheduler-0\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") " pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.826328 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") " pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.826372 
4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-config-data\") pod \"nova-scheduler-0\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") " pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.930210 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqtn5\" (UniqueName: \"kubernetes.io/projected/2780ff06-b30a-43e8-97d5-b9477d2713d6-kube-api-access-cqtn5\") pod \"nova-scheduler-0\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") " pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.930868 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") " pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.930917 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-config-data\") pod \"nova-scheduler-0\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") " pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.938809 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-config-data\") pod \"nova-scheduler-0\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") " pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.942067 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38032245-27d5-4a93-998b-8fb378d98197" 
path="/var/lib/kubelet/pods/38032245-27d5-4a93-998b-8fb378d98197/volumes" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.942762 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6845ac08-f194-417b-be65-16fa5d4fac41" path="/var/lib/kubelet/pods/6845ac08-f194-417b-be65-16fa5d4fac41/volumes" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.959115 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") " pod="openstack/nova-scheduler-0" Jan 21 15:56:11 crc kubenswrapper[4890]: I0121 15:56:11.967076 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqtn5\" (UniqueName: \"kubernetes.io/projected/2780ff06-b30a-43e8-97d5-b9477d2713d6-kube-api-access-cqtn5\") pod \"nova-scheduler-0\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") " pod="openstack/nova-scheduler-0" Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.094446 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.532180 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:56:12 crc kubenswrapper[4890]: W0121 15:56:12.539152 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2780ff06_b30a_43e8_97d5_b9477d2713d6.slice/crio-8b7810bcc8d4db291b73ea21d4da00a3f230594cd0b1cf2f32cec5750bb3de18 WatchSource:0}: Error finding container 8b7810bcc8d4db291b73ea21d4da00a3f230594cd0b1cf2f32cec5750bb3de18: Status 404 returned error can't find the container with id 8b7810bcc8d4db291b73ea21d4da00a3f230594cd0b1cf2f32cec5750bb3de18 Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.570920 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerStarted","Data":"b1850cb5e39351073cf39f1d0e88018e7526c6b8091783f112754a2815cb88bf"} Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.571445 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.572114 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2780ff06-b30a-43e8-97d5-b9477d2713d6","Type":"ContainerStarted","Data":"8b7810bcc8d4db291b73ea21d4da00a3f230594cd0b1cf2f32cec5750bb3de18"} Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.573556 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4099ef81-b3a1-4e17-af41-48813a488181","Type":"ContainerStarted","Data":"670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f"} Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.573606 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4099ef81-b3a1-4e17-af41-48813a488181","Type":"ContainerStarted","Data":"6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791"} Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.573619 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4099ef81-b3a1-4e17-af41-48813a488181","Type":"ContainerStarted","Data":"13868da3015f9a8897730019f67e00a5c1630ade212710d3d8f41d620b26efa5"} Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.607936 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7151557689999999 podStartE2EDuration="7.607919116s" podCreationTimestamp="2026-01-21 15:56:05 +0000 UTC" firstStartedPulling="2026-01-21 15:56:06.339209915 +0000 UTC m=+1448.700652324" lastFinishedPulling="2026-01-21 15:56:12.231973262 +0000 UTC m=+1454.593415671" observedRunningTime="2026-01-21 15:56:12.597526818 +0000 UTC m=+1454.958969227" watchObservedRunningTime="2026-01-21 15:56:12.607919116 +0000 UTC m=+1454.969361525" Jan 21 15:56:12 crc kubenswrapper[4890]: I0121 15:56:12.622431 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.622292444 podStartE2EDuration="2.622292444s" podCreationTimestamp="2026-01-21 15:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:12.6177161 +0000 UTC m=+1454.979158519" watchObservedRunningTime="2026-01-21 15:56:12.622292444 +0000 UTC m=+1454.983734853" Jan 21 15:56:13 crc kubenswrapper[4890]: I0121 15:56:13.587090 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2780ff06-b30a-43e8-97d5-b9477d2713d6","Type":"ContainerStarted","Data":"d4a5a52d2c5dbc8140605411d1d6694c13a149e34211ff2de1edf57e55a03b12"} Jan 21 15:56:13 crc kubenswrapper[4890]: I0121 15:56:13.608024 4890 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.608004253 podStartE2EDuration="2.608004253s" podCreationTimestamp="2026-01-21 15:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:13.607048649 +0000 UTC m=+1455.968491058" watchObservedRunningTime="2026-01-21 15:56:13.608004253 +0000 UTC m=+1455.969446662" Jan 21 15:56:16 crc kubenswrapper[4890]: I0121 15:56:16.123630 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:56:16 crc kubenswrapper[4890]: I0121 15:56:16.123945 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:56:17 crc kubenswrapper[4890]: I0121 15:56:17.095217 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 15:56:18 crc kubenswrapper[4890]: I0121 15:56:18.762166 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:56:18 crc kubenswrapper[4890]: I0121 15:56:18.762218 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:56:18 crc kubenswrapper[4890]: I0121 15:56:18.762259 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:56:18 crc kubenswrapper[4890]: I0121 
15:56:18.762927 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c1674a2bd424bd7189f15c6273406528477da9f8b31d68e03fb7356078df89f"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:56:18 crc kubenswrapper[4890]: I0121 15:56:18.762973 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://6c1674a2bd424bd7189f15c6273406528477da9f8b31d68e03fb7356078df89f" gracePeriod=600 Jan 21 15:56:19 crc kubenswrapper[4890]: I0121 15:56:19.643874 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="6c1674a2bd424bd7189f15c6273406528477da9f8b31d68e03fb7356078df89f" exitCode=0 Jan 21 15:56:19 crc kubenswrapper[4890]: I0121 15:56:19.643978 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"6c1674a2bd424bd7189f15c6273406528477da9f8b31d68e03fb7356078df89f"} Jan 21 15:56:19 crc kubenswrapper[4890]: I0121 15:56:19.644518 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c"} Jan 21 15:56:19 crc kubenswrapper[4890]: I0121 15:56:19.644547 4890 scope.go:117] "RemoveContainer" containerID="d0a634f6e929f7ffc1800d062d4e30092fbcb2b4f2a695698fc22410e40c8906" Jan 21 15:56:19 crc kubenswrapper[4890]: I0121 15:56:19.931993 4890 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:56:19 crc kubenswrapper[4890]: I0121 15:56:19.932329 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:56:20 crc kubenswrapper[4890]: I0121 15:56:20.956737 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84118502-58f0-48b2-b659-7f748311fa22" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:56:20 crc kubenswrapper[4890]: I0121 15:56:20.957630 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84118502-58f0-48b2-b659-7f748311fa22" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:56:21 crc kubenswrapper[4890]: I0121 15:56:21.125096 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:56:21 crc kubenswrapper[4890]: I0121 15:56:21.125536 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:56:22 crc kubenswrapper[4890]: I0121 15:56:22.095578 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 15:56:22 crc kubenswrapper[4890]: I0121 15:56:22.121967 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 15:56:22 crc kubenswrapper[4890]: I0121 15:56:22.132725 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Jan 21 15:56:22 crc kubenswrapper[4890]: I0121 15:56:22.132782 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:56:22 crc kubenswrapper[4890]: I0121 15:56:22.739832 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 15:56:29 crc kubenswrapper[4890]: I0121 15:56:29.936046 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:56:29 crc kubenswrapper[4890]: I0121 15:56:29.938465 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:56:29 crc kubenswrapper[4890]: I0121 15:56:29.939923 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:56:29 crc kubenswrapper[4890]: I0121 15:56:29.946165 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:56:30 crc kubenswrapper[4890]: I0121 15:56:30.794727 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:56:30 crc kubenswrapper[4890]: I0121 15:56:30.805067 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:56:31 crc kubenswrapper[4890]: I0121 15:56:31.130648 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:56:31 crc kubenswrapper[4890]: I0121 15:56:31.134273 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:56:31 crc kubenswrapper[4890]: I0121 15:56:31.136078 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Jan 21 15:56:31 crc kubenswrapper[4890]: I0121 15:56:31.810755 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:56:35 crc kubenswrapper[4890]: I0121 15:56:35.848575 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 15:56:55 crc kubenswrapper[4890]: I0121 15:56:55.891088 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1136-account-create-update-w54vk"] Jan 21 15:56:55 crc kubenswrapper[4890]: I0121 15:56:55.908671 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1136-account-create-update-w54vk"] Jan 21 15:56:55 crc kubenswrapper[4890]: I0121 15:56:55.924711 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40c65d3-b792-4fba-b282-4f1943d4f71f" path="/var/lib/kubelet/pods/c40c65d3-b792-4fba-b282-4f1943d4f71f/volumes" Jan 21 15:56:55 crc kubenswrapper[4890]: I0121 15:56:55.969794 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4691-account-create-update-pvhx9"] Jan 21 15:56:55 crc kubenswrapper[4890]: I0121 15:56:55.980608 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4691-account-create-update-pvhx9"] Jan 21 15:56:55 crc kubenswrapper[4890]: I0121 15:56:55.990329 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1136-account-create-update-bc5w2"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.000271 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.003587 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.018110 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa88b847-d54b-4e99-8dee-39c83f0a06d8-operator-scripts\") pod \"placement-1136-account-create-update-bc5w2\" (UID: \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\") " pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.018708 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h69c\" (UniqueName: \"kubernetes.io/projected/fa88b847-d54b-4e99-8dee-39c83f0a06d8-kube-api-access-5h69c\") pod \"placement-1136-account-create-update-bc5w2\" (UID: \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\") " pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.030826 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1136-account-create-update-bc5w2"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.085870 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f747l"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.115414 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4691-account-create-update-77kb4"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.117119 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.119721 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.121723 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h69c\" (UniqueName: \"kubernetes.io/projected/fa88b847-d54b-4e99-8dee-39c83f0a06d8-kube-api-access-5h69c\") pod \"placement-1136-account-create-update-bc5w2\" (UID: \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\") " pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.121909 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa88b847-d54b-4e99-8dee-39c83f0a06d8-operator-scripts\") pod \"placement-1136-account-create-update-bc5w2\" (UID: \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\") " pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.122035 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmxh\" (UniqueName: \"kubernetes.io/projected/f3ca330d-0795-4c1d-8a5e-12df75f280ba-kube-api-access-vsmxh\") pod \"barbican-4691-account-create-update-77kb4\" (UID: \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\") " pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.122152 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ca330d-0795-4c1d-8a5e-12df75f280ba-operator-scripts\") pod \"barbican-4691-account-create-update-77kb4\" (UID: \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\") " pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:56:56 
crc kubenswrapper[4890]: I0121 15:56:56.123275 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa88b847-d54b-4e99-8dee-39c83f0a06d8-operator-scripts\") pod \"placement-1136-account-create-update-bc5w2\" (UID: \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\") " pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.132232 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f747l"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.151644 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4691-account-create-update-77kb4"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.169591 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h69c\" (UniqueName: \"kubernetes.io/projected/fa88b847-d54b-4e99-8dee-39c83f0a06d8-kube-api-access-5h69c\") pod \"placement-1136-account-create-update-bc5w2\" (UID: \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\") " pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.224015 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsmxh\" (UniqueName: \"kubernetes.io/projected/f3ca330d-0795-4c1d-8a5e-12df75f280ba-kube-api-access-vsmxh\") pod \"barbican-4691-account-create-update-77kb4\" (UID: \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\") " pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.224401 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ca330d-0795-4c1d-8a5e-12df75f280ba-operator-scripts\") pod \"barbican-4691-account-create-update-77kb4\" (UID: \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\") " 
pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.225163 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ca330d-0795-4c1d-8a5e-12df75f280ba-operator-scripts\") pod \"barbican-4691-account-create-update-77kb4\" (UID: \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\") " pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.261082 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.261313 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="defb5f2d-053c-4b32-beb1-d10d70bacce1" containerName="openstackclient" containerID="cri-o://17e25bf33dcd118f48a8e8f7cae037f543abe8f9a7ffe1c912b57bf6e4df359b" gracePeriod=2 Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.263794 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsmxh\" (UniqueName: \"kubernetes.io/projected/f3ca330d-0795-4c1d-8a5e-12df75f280ba-kube-api-access-vsmxh\") pod \"barbican-4691-account-create-update-77kb4\" (UID: \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\") " pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.302781 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p8whv"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.303962 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p8whv" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.320519 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.324290 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.329660 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdc7k\" (UniqueName: \"kubernetes.io/projected/ef1ee1ae-c8ba-469c-ad49-896510b81e81-kube-api-access-bdc7k\") pod \"root-account-create-update-p8whv\" (UID: \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\") " pod="openstack/root-account-create-update-p8whv" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.329935 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ee1ae-c8ba-469c-ad49-896510b81e81-operator-scripts\") pod \"root-account-create-update-p8whv\" (UID: \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\") " pod="openstack/root-account-create-update-p8whv" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.345947 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.346261 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p8whv"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.375017 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.402397 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-de17-account-create-update-crzrr"] Jan 21 15:56:56 crc kubenswrapper[4890]: E0121 15:56:56.405837 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defb5f2d-053c-4b32-beb1-d10d70bacce1" containerName="openstackclient" Jan 21 15:56:56 crc 
kubenswrapper[4890]: I0121 15:56:56.405851 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="defb5f2d-053c-4b32-beb1-d10d70bacce1" containerName="openstackclient" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.406044 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="defb5f2d-053c-4b32-beb1-d10d70bacce1" containerName="openstackclient" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.406660 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.414936 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.425145 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-de17-account-create-update-crzrr"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.433109 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdc7k\" (UniqueName: \"kubernetes.io/projected/ef1ee1ae-c8ba-469c-ad49-896510b81e81-kube-api-access-bdc7k\") pod \"root-account-create-update-p8whv\" (UID: \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\") " pod="openstack/root-account-create-update-p8whv" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.434509 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ee1ae-c8ba-469c-ad49-896510b81e81-operator-scripts\") pod \"root-account-create-update-p8whv\" (UID: \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\") " pod="openstack/root-account-create-update-p8whv" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.435842 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ee1ae-c8ba-469c-ad49-896510b81e81-operator-scripts\") pod 
\"root-account-create-update-p8whv\" (UID: \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\") " pod="openstack/root-account-create-update-p8whv" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.450081 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.469670 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.470283 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerName="openstack-network-exporter" containerID="cri-o://f816fbeb470ee262ad039181a4ae9efe8ea0d75924ce11d2ac8682922df4c451" gracePeriod=300 Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.487320 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdc7k\" (UniqueName: \"kubernetes.io/projected/ef1ee1ae-c8ba-469c-ad49-896510b81e81-kube-api-access-bdc7k\") pod \"root-account-create-update-p8whv\" (UID: \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\") " pod="openstack/root-account-create-update-p8whv" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.529041 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-de17-account-create-update-jrxd2"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.540420 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69zbh\" (UniqueName: \"kubernetes.io/projected/86742085-590c-4ce5-b694-8a91a90c0b6f-kube-api-access-69zbh\") pod \"neutron-de17-account-create-update-crzrr\" (UID: \"86742085-590c-4ce5-b694-8a91a90c0b6f\") " pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.540640 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86742085-590c-4ce5-b694-8a91a90c0b6f-operator-scripts\") pod \"neutron-de17-account-create-update-crzrr\" (UID: \"86742085-590c-4ce5-b694-8a91a90c0b6f\") " pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:56:56 crc kubenswrapper[4890]: E0121 15:56:56.542675 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 15:56:56 crc kubenswrapper[4890]: E0121 15:56:56.542738 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data podName:9bb9aa52-0895-418e-8e0b-d922948e85a7 nodeName:}" failed. No retries permitted until 2026-01-21 15:56:57.042720293 +0000 UTC m=+1499.404162702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data") pod "rabbitmq-cell1-server-0" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7") : configmap "rabbitmq-cell1-config-data" not found Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.548272 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-de17-account-create-update-jrxd2"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.631825 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerName="ovsdbserver-nb" containerID="cri-o://7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09" gracePeriod=300 Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.643515 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86742085-590c-4ce5-b694-8a91a90c0b6f-operator-scripts\") pod 
\"neutron-de17-account-create-update-crzrr\" (UID: \"86742085-590c-4ce5-b694-8a91a90c0b6f\") " pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.643609 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69zbh\" (UniqueName: \"kubernetes.io/projected/86742085-590c-4ce5-b694-8a91a90c0b6f-kube-api-access-69zbh\") pod \"neutron-de17-account-create-update-crzrr\" (UID: \"86742085-590c-4ce5-b694-8a91a90c0b6f\") " pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.643932 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e4f9-account-create-update-x5k9s"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.644613 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86742085-590c-4ce5-b694-8a91a90c0b6f-operator-scripts\") pod \"neutron-de17-account-create-update-crzrr\" (UID: \"86742085-590c-4ce5-b694-8a91a90c0b6f\") " pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.679387 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p8whv" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.698654 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69zbh\" (UniqueName: \"kubernetes.io/projected/86742085-590c-4ce5-b694-8a91a90c0b6f-kube-api-access-69zbh\") pod \"neutron-de17-account-create-update-crzrr\" (UID: \"86742085-590c-4ce5-b694-8a91a90c0b6f\") " pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.750398 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e4f9-account-create-update-x5k9s"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.808812 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6c5-account-create-update-h5s54"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.824832 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e6c5-account-create-update-h5s54"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.840944 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-thtbf"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.855853 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-thtbf"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.856164 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.873106 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.896276 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.896633 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerName="openstack-network-exporter" containerID="cri-o://9c227e45c94f7742e46f8728f499fa534251a81e5033658fec415f426bd7319e" gracePeriod=300 Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.913009 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wm8lg"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.937686 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wm8lg"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.952084 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e776-account-create-update-7pzwv"] Jan 21 15:56:56 crc kubenswrapper[4890]: I0121 15:56:56.961603 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e776-account-create-update-7pzwv"] Jan 21 15:56:56 crc kubenswrapper[4890]: E0121 15:56:56.978607 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 15:56:56 crc kubenswrapper[4890]: E0121 15:56:56.978684 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data podName:caae7093-b594-47fb-b863-38d825f0048d nodeName:}" failed. No retries permitted until 2026-01-21 15:56:57.478645741 +0000 UTC m=+1499.840088150 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data") pod "rabbitmq-server-0" (UID: "caae7093-b594-47fb-b863-38d825f0048d") : configmap "rabbitmq-config-data" not found Jan 21 15:56:57 crc kubenswrapper[4890]: I0121 15:56:57.054605 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerName="ovsdbserver-sb" containerID="cri-o://b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0" gracePeriod=300 Jan 21 15:56:57 crc kubenswrapper[4890]: I0121 15:56:57.059977 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-ltrrf"] Jan 21 15:56:57 crc kubenswrapper[4890]: I0121 15:56:57.071522 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-ltrrf"] Jan 21 15:56:57 crc kubenswrapper[4890]: E0121 15:56:57.079968 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 15:56:57 crc kubenswrapper[4890]: E0121 15:56:57.080030 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data podName:9bb9aa52-0895-418e-8e0b-d922948e85a7 nodeName:}" failed. No retries permitted until 2026-01-21 15:56:58.080016034 +0000 UTC m=+1500.441458433 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data") pod "rabbitmq-cell1-server-0" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7") : configmap "rabbitmq-cell1-config-data" not found Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.081734 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-43f6-account-create-update-qd2ns"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.117413 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-43f6-account-create-update-qd2ns"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.144946 4890 generic.go:334] "Generic (PLEG): container finished" podID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerID="9c227e45c94f7742e46f8728f499fa534251a81e5033658fec415f426bd7319e" exitCode=2 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.145004 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4e21e0c9-91df-4f87-a32f-30fa3d3fa874","Type":"ContainerDied","Data":"9c227e45c94f7742e46f8728f499fa534251a81e5033658fec415f426bd7319e"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.148303 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.148557 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerName="ovn-northd" containerID="cri-o://e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.149014 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerName="openstack-network-exporter" 
containerID="cri-o://abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.183105 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3ab783d9-382b-4b61-85f0-f4a82160effe/ovsdbserver-nb/0.log" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.183145 4890 generic.go:334] "Generic (PLEG): container finished" podID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerID="f816fbeb470ee262ad039181a4ae9efe8ea0d75924ce11d2ac8682922df4c451" exitCode=2 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.183161 4890 generic.go:334] "Generic (PLEG): container finished" podID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerID="7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.183181 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3ab783d9-382b-4b61-85f0-f4a82160effe","Type":"ContainerDied","Data":"f816fbeb470ee262ad039181a4ae9efe8ea0d75924ce11d2ac8682922df4c451"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.183203 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3ab783d9-382b-4b61-85f0-f4a82160effe","Type":"ContainerDied","Data":"7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.190015 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d5d0-account-create-update-nlvhz"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.214854 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d5d0-account-create-update-nlvhz"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.231586 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-f8v9z"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 
15:56:57.240758 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-f8v9z"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.253327 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-skk7h"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.267398 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-skk7h"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.288675 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-dfk6x"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.301881 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pmrch"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.315492 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zk2ll"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.346146 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-zk2ll"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.359986 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-5wh28"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.360268 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-5wh28" podUID="57d2ee81-accb-4ff7-8fa6-52ed7d728258" containerName="openstack-network-exporter" containerID="cri-o://e1aa6bfb45b550829709119ceae8ae53f1b530480df2a6e2a81fbe2d0d43a190" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.371128 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1136-account-create-update-bc5w2"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.386657 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-688fbc5db-f9csp"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 
15:56:57.387145 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-688fbc5db-f9csp" podUID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerName="placement-log" containerID="cri-o://98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.387582 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-688fbc5db-f9csp" podUID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerName="placement-api" containerID="cri-o://1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: W0121 15:56:57.393112 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa88b847_d54b_4e99_8dee_39c83f0a06d8.slice/crio-69801fdd15c722ce03ed40c1d26686cac788b529eac6f8cff5dc67f90aa64bb2 WatchSource:0}: Error finding container 69801fdd15c722ce03ed40c1d26686cac788b529eac6f8cff5dc67f90aa64bb2: Status 404 returned error can't find the container with id 69801fdd15c722ce03ed40c1d26686cac788b529eac6f8cff5dc67f90aa64bb2 Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:57.418471 4890 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:56:59 crc kubenswrapper[4890]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: 
MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: if [ -n "placement" ]; then Jan 21 15:56:59 crc kubenswrapper[4890]: GRANT_DATABASE="placement" Jan 21 15:56:59 crc kubenswrapper[4890]: else Jan 21 15:56:59 crc kubenswrapper[4890]: GRANT_DATABASE="*" Jan 21 15:56:59 crc kubenswrapper[4890]: fi Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: # going for maximum compatibility here: Jan 21 15:56:59 crc kubenswrapper[4890]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 15:56:59 crc kubenswrapper[4890]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 15:56:59 crc kubenswrapper[4890]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 15:56:59 crc kubenswrapper[4890]: # support updates Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: $MYSQL_CMD < logger="UnhandledError" Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:57.422461 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-1136-account-create-update-bc5w2" podUID="fa88b847-d54b-4e99-8dee-39c83f0a06d8" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.472390 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hzndt"] Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:57.480100 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0 is running failed: container process not found" containerID="b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 15:56:59 
crc kubenswrapper[4890]: E0121 15:56:57.480529 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0 is running failed: container process not found" containerID="b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.480592 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hzndt"] Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:57.480709 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0 is running failed: container process not found" containerID="b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:57.480728 4890 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerName="ovsdbserver-sb" Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:57.491671 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:57.491746 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data podName:caae7093-b594-47fb-b863-38d825f0048d nodeName:}" failed. 
No retries permitted until 2026-01-21 15:56:58.491728099 +0000 UTC m=+1500.853170508 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data") pod "rabbitmq-server-0" (UID: "caae7093-b594-47fb-b863-38d825f0048d") : configmap "rabbitmq-config-data" not found Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.576796 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-frrhq"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.590231 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-frrhq"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.713413 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-xdkgv"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.713676 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" podUID="212a7372-7b31-40f6-bef8-fc76925be961" containerName="dnsmasq-dns" containerID="cri-o://e83ef076a5c80f27ea8f77e9616e9b721e5e3861579511656b623a4c8b0a184d" gracePeriod=10 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.764006 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.764546 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-server" containerID="cri-o://044efc2d7955bb08fe4ff237c3a7e4e25d9ab4e72fa5d3faa7c58ac27561b350" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.764965 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="swift-recon-cron" 
containerID="cri-o://d353b883ad9d704cf38a51820b942338cdd8c742501c227a8140207f662015e8" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765015 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="rsync" containerID="cri-o://02a34f2bdfeb043480bedf1700ad25535feb47fbbf2cc661cbb62aad70e40a3b" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765053 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-expirer" containerID="cri-o://22335d0f4d49f32620ca48289dd4eb408b7f064e87d7877cc89abf517378da85" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765090 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-updater" containerID="cri-o://1df25c4313e8f39ad26d3ec8a848f850a004e7acdea809912d27022424ac0fec" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765124 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-auditor" containerID="cri-o://291b43ebb5749379f57dbecf17da84aa48983e3db96591d9b7e0aa8d76cc1621" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765160 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-replicator" containerID="cri-o://7c5460ff3a431a21df2a718e89dbf2a5a523b0ee5fdfadf49395a1b74d24c6ab" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765193 4890 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-server" containerID="cri-o://15ae8d44e4e537260de3b6431b223bf85ce1e10d4762ac9a192b7a7606fb94e3" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765243 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-updater" containerID="cri-o://8616884f18e315e3258c25763c5c8cdaea184dc25ba69e7d8e0fa91ac49eaa89" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765285 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-auditor" containerID="cri-o://520ea43d4d0b04096ca36e892322861f691a6670e78931f59f2ea9d885179af5" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765324 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-replicator" containerID="cri-o://ae1658689b220e377c8fba9958351f538aaba5502635f74cadc260a696a44a6f" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765380 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-server" containerID="cri-o://5fa5e2d9ca2571b7361e659ef85544eb30c548cf9527ac1a3be6a7a829e8fbee" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765418 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-reaper" containerID="cri-o://56a854520d26c749a116af4b530898a508240c3791da8d8b127790fb93dfdcc0" gracePeriod=30 Jan 21 15:56:59 crc 
kubenswrapper[4890]: I0121 15:56:57.765458 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-auditor" containerID="cri-o://b12bd693bb7580997fa08c163b6c91d65afd3c016d9dbb69b3a75a78a8a917e1" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:57.765506 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-replicator" containerID="cri-o://ec758b8a6824700021b92bcf01c6881e87a7af7bbc0acf6895ec0b0549188a0c" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.026853 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f759e91-6dab-4432-9431-ce312918c7e7" path="/var/lib/kubelet/pods/2f759e91-6dab-4432-9431-ce312918c7e7/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.042002 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34c20b0a-f576-475e-846d-75442d91073d" path="/var/lib/kubelet/pods/34c20b0a-f576-475e-846d-75442d91073d/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.057479 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353bc295-c08f-40a8-97eb-a6d110737f71" path="/var/lib/kubelet/pods/353bc295-c08f-40a8-97eb-a6d110737f71/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.069706 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d291e4f-5daf-4e1a-888f-10df2538d171" path="/var/lib/kubelet/pods/3d291e4f-5daf-4e1a-888f-10df2538d171/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.077395 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d621d1-f812-4467-aeee-2ed0da3d68ac" path="/var/lib/kubelet/pods/55d621d1-f812-4467-aeee-2ed0da3d68ac/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.101078 
4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e75f4bb-e544-49f4-88ba-ed75d8d0365f" path="/var/lib/kubelet/pods/5e75f4bb-e544-49f4-88ba-ed75d8d0365f/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.111131 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66fbb4dd-7cb6-44ca-890b-3d54e9b73462" path="/var/lib/kubelet/pods/66fbb4dd-7cb6-44ca-890b-3d54e9b73462/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.115521 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700a77fe-9836-4979-8c95-7054c3d8d42a" path="/var/lib/kubelet/pods/700a77fe-9836-4979-8c95-7054c3d8d42a/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.117331 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5da046-eade-47f6-91bc-2f25e44a4c85" path="/var/lib/kubelet/pods/7a5da046-eade-47f6-91bc-2f25e44a4c85/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.136564 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a501566d-03dd-40b1-bda9-8c6173d9292f" path="/var/lib/kubelet/pods/a501566d-03dd-40b1-bda9-8c6173d9292f/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.140909 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd" path="/var/lib/kubelet/pods/b7a11fea-ce4e-4a73-81ba-3a9304f2ddcd/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.142475 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c48566-f878-4861-ac44-4e2ea1c107a4" path="/var/lib/kubelet/pods/b8c48566-f878-4861-ac44-4e2ea1c107a4/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.145998 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd07a67c-449b-4a93-8af5-b050a682d06b" path="/var/lib/kubelet/pods/bd07a67c-449b-4a93-8af5-b050a682d06b/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.154335 
4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced3b279-b256-483b-af6f-3b13721f1ef8" path="/var/lib/kubelet/pods/ced3b279-b256-483b-af6f-3b13721f1ef8/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.158685 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d08ecdf9-e34c-476c-99d6-f2e7db2b5129" path="/var/lib/kubelet/pods/d08ecdf9-e34c-476c-99d6-f2e7db2b5129/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.160320 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb40c3a-bdb9-4fd1-a722-843b14bad9d8" path="/var/lib/kubelet/pods/deb40c3a-bdb9-4fd1-a722-843b14bad9d8/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.161011 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.161042 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5585884bc-vnz4h"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.161059 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.161072 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-s9xcj"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.161086 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-s9xcj"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.161328 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="371fefce-bb16-4c48-ac5a-01885e77c090" containerName="cinder-api-log" containerID="cri-o://766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.161614 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerName="cinder-scheduler" containerID="cri-o://17017fb4db752be398957128e72379f1e6bbd55f2c985855c266996c3fbae23f" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.161827 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5585884bc-vnz4h" podUID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerName="neutron-api" containerID="cri-o://7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.162072 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="371fefce-bb16-4c48-ac5a-01885e77c090" containerName="cinder-api" containerID="cri-o://7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.162259 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerName="probe" containerID="cri-o://ac358f25d3bc11ecfd3d8286ee71238981958d5ba551cfdc752cc98b87178c26" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.162313 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5585884bc-vnz4h" podUID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerName="neutron-httpd" containerID="cri-o://af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.179928 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.180162 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerName="glance-log" 
containerID="cri-o://1ca3498c72178f6185568c6444f79a4b05e9c4a827b67e2ab8184900041c243b" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.180339 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerName="glance-httpd" containerID="cri-o://449855515a900befe1127318232d23ee1ce08ab1fc81e724dd3ee85e1bdccca0" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:58.182744 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:58.182791 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data podName:9bb9aa52-0895-418e-8e0b-d922948e85a7 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:00.182776445 +0000 UTC m=+1502.544218854 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data") pod "rabbitmq-cell1-server-0" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7") : configmap "rabbitmq-cell1-config-data" not found Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.201843 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-lwpq6"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.260560 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-lwpq6"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.292560 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.302960 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1136-account-create-update-bc5w2"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.309408 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.310777 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="697e1d3a-fab0-471b-bea8-43212f489fec" containerName="glance-log" containerID="cri-o://a0f8f3b3b110e555d59db6b93fc91f9b56e10fd7253b81778b2e41c868e02c8a" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.311196 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="697e1d3a-fab0-471b-bea8-43212f489fec" containerName="glance-httpd" containerID="cri-o://9e0291aac0c698ccda6b3ca51011fe12c6a3dfe3353a4fd388da9648e8a82def" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.319071 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-lb47b"] Jan 21 15:56:59 crc 
kubenswrapper[4890]: I0121 15:56:58.323086 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-lb47b"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.330860 4890 generic.go:334] "Generic (PLEG): container finished" podID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerID="98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.330933 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-688fbc5db-f9csp" event={"ID":"2a25c82c-f72c-4ecb-a760-a568761bd5f2","Type":"ContainerDied","Data":"98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.351601 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" containerName="rabbitmq" containerID="cri-o://489037191e7d74a2730eac1c46abc09d34fce2781e436638fcd47291281cfd30" gracePeriod=604800 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.367017 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4e21e0c9-91df-4f87-a32f-30fa3d3fa874/ovsdbserver-sb/0.log" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.367070 4890 generic.go:334] "Generic (PLEG): container finished" podID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerID="b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.367166 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4e21e0c9-91df-4f87-a32f-30fa3d3fa874","Type":"ContainerDied","Data":"b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.382880 4890 generic.go:334] "Generic (PLEG): container finished" podID="212a7372-7b31-40f6-bef8-fc76925be961" 
containerID="e83ef076a5c80f27ea8f77e9616e9b721e5e3861579511656b623a4c8b0a184d" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.382938 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" event={"ID":"212a7372-7b31-40f6-bef8-fc76925be961","Type":"ContainerDied","Data":"e83ef076a5c80f27ea8f77e9616e9b721e5e3861579511656b623a4c8b0a184d"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.384870 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4691-account-create-update-77kb4"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.390486 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1136-account-create-update-bc5w2" event={"ID":"fa88b847-d54b-4e99-8dee-39c83f0a06d8","Type":"ContainerStarted","Data":"69801fdd15c722ce03ed40c1d26686cac788b529eac6f8cff5dc67f90aa64bb2"} Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:58.426367 4890 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:56:59 crc kubenswrapper[4890]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: if [ -n "placement" ]; then Jan 21 15:56:59 crc kubenswrapper[4890]: GRANT_DATABASE="placement" Jan 21 15:56:59 crc kubenswrapper[4890]: else Jan 21 15:56:59 crc kubenswrapper[4890]: 
GRANT_DATABASE="*" Jan 21 15:56:59 crc kubenswrapper[4890]: fi Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: # going for maximum compatibility here: Jan 21 15:56:59 crc kubenswrapper[4890]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 15:56:59 crc kubenswrapper[4890]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 15:56:59 crc kubenswrapper[4890]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 15:56:59 crc kubenswrapper[4890]: # support updates Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: $MYSQL_CMD < logger="UnhandledError" Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:58.427540 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-1136-account-create-update-bc5w2" podUID="fa88b847-d54b-4e99-8dee-39c83f0a06d8" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.427646 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nknm8"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.450447 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nknm8"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466452 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="22335d0f4d49f32620ca48289dd4eb408b7f064e87d7877cc89abf517378da85" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466482 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="291b43ebb5749379f57dbecf17da84aa48983e3db96591d9b7e0aa8d76cc1621" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466491 4890 generic.go:334] "Generic (PLEG): 
container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="7c5460ff3a431a21df2a718e89dbf2a5a523b0ee5fdfadf49395a1b74d24c6ab" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466502 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="8616884f18e315e3258c25763c5c8cdaea184dc25ba69e7d8e0fa91ac49eaa89" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466511 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="ae1658689b220e377c8fba9958351f538aaba5502635f74cadc260a696a44a6f" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466643 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"22335d0f4d49f32620ca48289dd4eb408b7f064e87d7877cc89abf517378da85"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466725 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"291b43ebb5749379f57dbecf17da84aa48983e3db96591d9b7e0aa8d76cc1621"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466742 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"7c5460ff3a431a21df2a718e89dbf2a5a523b0ee5fdfadf49395a1b74d24c6ab"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466830 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"8616884f18e315e3258c25763c5c8cdaea184dc25ba69e7d8e0fa91ac49eaa89"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.466842 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"ae1658689b220e377c8fba9958351f538aaba5502635f74cadc260a696a44a6f"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.468750 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5wh28_57d2ee81-accb-4ff7-8fa6-52ed7d728258/openstack-network-exporter/0.log" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.468792 4890 generic.go:334] "Generic (PLEG): container finished" podID="57d2ee81-accb-4ff7-8fa6-52ed7d728258" containerID="e1aa6bfb45b550829709119ceae8ae53f1b530480df2a6e2a81fbe2d0d43a190" exitCode=2 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.468852 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5wh28" event={"ID":"57d2ee81-accb-4ff7-8fa6-52ed7d728258","Type":"ContainerDied","Data":"e1aa6bfb45b550829709119ceae8ae53f1b530480df2a6e2a81fbe2d0d43a190"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.471212 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.476239 4890 generic.go:334] "Generic (PLEG): container finished" podID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerID="abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052" exitCode=2 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.476275 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"332f4b6c-7fea-4dae-bb46-3c35ee84ba25","Type":"ContainerDied","Data":"abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.491420 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.491764 4890 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-log" containerID="cri-o://6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.492172 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-metadata" containerID="cri-o://670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.496426 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.496822 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84118502-58f0-48b2-b659-7f748311fa22" containerName="nova-api-log" containerID="cri-o://60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.497009 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84118502-58f0-48b2-b659-7f748311fa22" containerName="nova-api-api" containerID="cri-o://f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:58.498703 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:58.498755 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data podName:caae7093-b594-47fb-b863-38d825f0048d nodeName:}" failed. No retries permitted until 2026-01-21 15:57:00.498739068 +0000 UTC m=+1502.860181477 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data") pod "rabbitmq-server-0" (UID: "caae7093-b594-47fb-b863-38d825f0048d") : configmap "rabbitmq-config-data" not found Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.520438 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-de17-account-create-update-crzrr"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.529467 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-j254l"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.545877 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-j254l"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.555811 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xgb85"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.563940 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" containerID="cri-o://3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" gracePeriod=29 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.565407 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xgb85"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.572489 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xxnmh"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.579865 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5969dffb49-ng442"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.580182 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5969dffb49-ng442" podUID="d3466f4b-2d63-490d-bae0-0921a4874daa" 
containerName="barbican-worker-log" containerID="cri-o://fec4b0c0a2231fb8d38d939d55a6826e9794606484374bcac4d37face3381fe7" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.580281 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5969dffb49-ng442" podUID="d3466f4b-2d63-490d-bae0-0921a4874daa" containerName="barbican-worker" containerID="cri-o://2ca05563eab7c7837a3f0611a032f1c0a8bc338b86d2e64c4be0a14c487366e0" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:58.582042 4890 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 21 15:56:59 crc kubenswrapper[4890]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 15:56:59 crc kubenswrapper[4890]: + source /usr/local/bin/container-scripts/functions Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNBridge=br-int Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNRemote=tcp:localhost:6642 Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNEncapType=geneve Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNAvailabilityZones= Jan 21 15:56:59 crc kubenswrapper[4890]: ++ EnableChassisAsGateway=true Jan 21 15:56:59 crc kubenswrapper[4890]: ++ PhysicalNetworks= Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNHostName= Jan 21 15:56:59 crc kubenswrapper[4890]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 15:56:59 crc kubenswrapper[4890]: ++ ovs_dir=/var/lib/openvswitch Jan 21 15:56:59 crc kubenswrapper[4890]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 15:56:59 crc kubenswrapper[4890]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 15:56:59 crc kubenswrapper[4890]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 15:56:59 crc kubenswrapper[4890]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 15:56:59 crc kubenswrapper[4890]: + sleep 0.5 Jan 21 15:56:59 crc kubenswrapper[4890]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 15:56:59 crc kubenswrapper[4890]: + sleep 0.5 Jan 21 15:56:59 crc kubenswrapper[4890]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 15:56:59 crc kubenswrapper[4890]: + cleanup_ovsdb_server_semaphore Jan 21 15:56:59 crc kubenswrapper[4890]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 15:56:59 crc kubenswrapper[4890]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 15:56:59 crc kubenswrapper[4890]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-dfk6x" message=< Jan 21 15:56:59 crc kubenswrapper[4890]: Exiting ovsdb-server (5) [ OK ] Jan 21 15:56:59 crc kubenswrapper[4890]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 15:56:59 crc kubenswrapper[4890]: + source /usr/local/bin/container-scripts/functions Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNBridge=br-int Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNRemote=tcp:localhost:6642 Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNEncapType=geneve Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNAvailabilityZones= Jan 21 15:56:59 crc kubenswrapper[4890]: ++ EnableChassisAsGateway=true Jan 21 15:56:59 crc kubenswrapper[4890]: ++ PhysicalNetworks= Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNHostName= Jan 21 15:56:59 crc kubenswrapper[4890]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 15:56:59 crc kubenswrapper[4890]: ++ ovs_dir=/var/lib/openvswitch Jan 21 15:56:59 crc kubenswrapper[4890]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 15:56:59 crc kubenswrapper[4890]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 15:56:59 crc kubenswrapper[4890]: ++ 
SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 15:56:59 crc kubenswrapper[4890]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 15:56:59 crc kubenswrapper[4890]: + sleep 0.5 Jan 21 15:56:59 crc kubenswrapper[4890]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 15:56:59 crc kubenswrapper[4890]: + sleep 0.5 Jan 21 15:56:59 crc kubenswrapper[4890]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 15:56:59 crc kubenswrapper[4890]: + cleanup_ovsdb_server_semaphore Jan 21 15:56:59 crc kubenswrapper[4890]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 15:56:59 crc kubenswrapper[4890]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 15:56:59 crc kubenswrapper[4890]: > Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:58.582072 4890 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 21 15:56:59 crc kubenswrapper[4890]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 15:56:59 crc kubenswrapper[4890]: + source /usr/local/bin/container-scripts/functions Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNBridge=br-int Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNRemote=tcp:localhost:6642 Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNEncapType=geneve Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNAvailabilityZones= Jan 21 15:56:59 crc kubenswrapper[4890]: ++ EnableChassisAsGateway=true Jan 21 15:56:59 crc kubenswrapper[4890]: ++ PhysicalNetworks= Jan 21 15:56:59 crc kubenswrapper[4890]: ++ OVNHostName= Jan 21 15:56:59 crc kubenswrapper[4890]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 15:56:59 crc kubenswrapper[4890]: ++ ovs_dir=/var/lib/openvswitch Jan 21 15:56:59 crc kubenswrapper[4890]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 15:56:59 crc kubenswrapper[4890]: ++ 
FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 15:56:59 crc kubenswrapper[4890]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 15:56:59 crc kubenswrapper[4890]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 15:56:59 crc kubenswrapper[4890]: + sleep 0.5 Jan 21 15:56:59 crc kubenswrapper[4890]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 15:56:59 crc kubenswrapper[4890]: + sleep 0.5 Jan 21 15:56:59 crc kubenswrapper[4890]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 15:56:59 crc kubenswrapper[4890]: + cleanup_ovsdb_server_semaphore Jan 21 15:56:59 crc kubenswrapper[4890]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 15:56:59 crc kubenswrapper[4890]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 15:56:59 crc kubenswrapper[4890]: > pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" containerID="cri-o://283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.582106 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" containerID="cri-o://283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" gracePeriod=29 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.589262 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xxnmh"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.603591 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-q2zlv"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.610621 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-q2zlv"] Jan 21 15:56:59 crc 
kubenswrapper[4890]: I0121 15:56:58.620506 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6946c9f5b4-2l82t"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.620753 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6946c9f5b4-2l82t" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api-log" containerID="cri-o://6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.621145 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6946c9f5b4-2l82t" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api" containerID="cri-o://9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.653782 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-846846cd4b-wmjvw"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.654126 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" podUID="0365c802-8af2-4230-a2e7-90959d273419" containerName="barbican-keystone-listener-log" containerID="cri-o://c2f49312d4e89e99e690840cb5a943c78b8a717a32ae94d1b8fa6f3f50c660c1" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.654242 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" podUID="0365c802-8af2-4230-a2e7-90959d273419" containerName="barbican-keystone-listener" containerID="cri-o://9d30851de1888098b6eefb06ebbe23168f3d78920011b9933b08aff11f05029f" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.673427 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 
15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.688456 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.688900 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="052ad7d6-6d71-4b3b-962a-db635b2df4a3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.701137 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p8whv"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.710652 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="91b03ee9-0cb8-49eb-b3da-3d1c42e15720" containerName="galera" containerID="cri-o://ee8636883cf7ef685bc793e2761b19d6a77deb5c7898b985a0cc704d99683d91" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:58.718230 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="caae7093-b594-47fb-b863-38d825f0048d" containerName="rabbitmq" containerID="cri-o://ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2" gracePeriod=604800 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.273419 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-64d44774fc-92wps"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.274433 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-64d44774fc-92wps" podUID="d009f76d-bc65-453c-a05f-29454314ab7a" containerName="proxy-server" containerID="cri-o://4a223e2232d09a7902cafe5997f0744b43b30ca16b7805665ca1778aa131272b" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.274537 4890 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-64d44774fc-92wps" podUID="d009f76d-bc65-453c-a05f-29454314ab7a" containerName="proxy-httpd" containerID="cri-o://900fd7400b1809198eec2f87e30f7758ce7b4277e57b7f022112ff25e938935f" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.339369 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.339618 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2780ff06-b30a-43e8-97d5-b9477d2713d6" containerName="nova-scheduler-scheduler" containerID="cri-o://d4a5a52d2c5dbc8140605411d1d6694c13a149e34211ff2de1edf57e55a03b12" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.366148 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmzvq"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.376730 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.376960 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="50c99515-8e62-4e54-9ffc-e9294db2dc4f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.399409 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmzvq"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.402849 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kcplg"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.414237 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.414547 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="477ba084-e185-42c6-a0ae-f5de448a4d13" containerName="nova-cell1-conductor-conductor" containerID="cri-o://5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6" gracePeriod=30 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.424783 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kcplg"] Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.489167 4890 generic.go:334] "Generic (PLEG): container finished" podID="371fefce-bb16-4c48-ac5a-01885e77c090" containerID="766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.489239 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"371fefce-bb16-4c48-ac5a-01885e77c090","Type":"ContainerDied","Data":"766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.492206 4890 generic.go:334] "Generic (PLEG): container finished" podID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerID="ac358f25d3bc11ecfd3d8286ee71238981958d5ba551cfdc752cc98b87178c26" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.492277 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1463d4e1-9ed2-4f45-b473-a94d18a4156f","Type":"ContainerDied","Data":"ac358f25d3bc11ecfd3d8286ee71238981958d5ba551cfdc752cc98b87178c26"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.505747 4890 generic.go:334] "Generic (PLEG): container finished" podID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerID="1ca3498c72178f6185568c6444f79a4b05e9c4a827b67e2ab8184900041c243b" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: 
I0121 15:56:59.505823 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e775a69e-619f-4920-8fc9-6d216e400c0e","Type":"ContainerDied","Data":"1ca3498c72178f6185568c6444f79a4b05e9c4a827b67e2ab8184900041c243b"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.514181 4890 generic.go:334] "Generic (PLEG): container finished" podID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.514272 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dfk6x" event={"ID":"233162f3-fe28-4476-bc40-eb4b138ae68a","Type":"ContainerDied","Data":"283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.517339 4890 generic.go:334] "Generic (PLEG): container finished" podID="84118502-58f0-48b2-b659-7f748311fa22" containerID="60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.517440 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84118502-58f0-48b2-b659-7f748311fa22","Type":"ContainerDied","Data":"60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.520899 4890 generic.go:334] "Generic (PLEG): container finished" podID="697e1d3a-fab0-471b-bea8-43212f489fec" containerID="a0f8f3b3b110e555d59db6b93fc91f9b56e10fd7253b81778b2e41c868e02c8a" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.521022 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"697e1d3a-fab0-471b-bea8-43212f489fec","Type":"ContainerDied","Data":"a0f8f3b3b110e555d59db6b93fc91f9b56e10fd7253b81778b2e41c868e02c8a"} Jan 21 15:56:59 crc kubenswrapper[4890]: 
I0121 15:56:59.524077 4890 generic.go:334] "Generic (PLEG): container finished" podID="4099ef81-b3a1-4e17-af41-48813a488181" containerID="6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.524222 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4099ef81-b3a1-4e17-af41-48813a488181","Type":"ContainerDied","Data":"6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.527902 4890 generic.go:334] "Generic (PLEG): container finished" podID="0365c802-8af2-4230-a2e7-90959d273419" containerID="c2f49312d4e89e99e690840cb5a943c78b8a717a32ae94d1b8fa6f3f50c660c1" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.527989 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" event={"ID":"0365c802-8af2-4230-a2e7-90959d273419","Type":"ContainerDied","Data":"c2f49312d4e89e99e690840cb5a943c78b8a717a32ae94d1b8fa6f3f50c660c1"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.530579 4890 generic.go:334] "Generic (PLEG): container finished" podID="defb5f2d-053c-4b32-beb1-d10d70bacce1" containerID="17e25bf33dcd118f48a8e8f7cae037f543abe8f9a7ffe1c912b57bf6e4df359b" exitCode=137 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.533990 4890 generic.go:334] "Generic (PLEG): container finished" podID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerID="af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.534055 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5585884bc-vnz4h" event={"ID":"902e1b21-9fb7-4302-b0f7-a832c7a42ca1","Type":"ContainerDied","Data":"af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.554990 4890 
generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="02a34f2bdfeb043480bedf1700ad25535feb47fbbf2cc661cbb62aad70e40a3b" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.555020 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="1df25c4313e8f39ad26d3ec8a848f850a004e7acdea809912d27022424ac0fec" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.555030 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="15ae8d44e4e537260de3b6431b223bf85ce1e10d4762ac9a192b7a7606fb94e3" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.555122 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"02a34f2bdfeb043480bedf1700ad25535feb47fbbf2cc661cbb62aad70e40a3b"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.555198 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"1df25c4313e8f39ad26d3ec8a848f850a004e7acdea809912d27022424ac0fec"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.555214 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"15ae8d44e4e537260de3b6431b223bf85ce1e10d4762ac9a192b7a7606fb94e3"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.555226 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"520ea43d4d0b04096ca36e892322861f691a6670e78931f59f2ea9d885179af5"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.555037 4890 generic.go:334] "Generic (PLEG): 
container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="520ea43d4d0b04096ca36e892322861f691a6670e78931f59f2ea9d885179af5" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557119 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="5fa5e2d9ca2571b7361e659ef85544eb30c548cf9527ac1a3be6a7a829e8fbee" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557128 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="56a854520d26c749a116af4b530898a508240c3791da8d8b127790fb93dfdcc0" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557137 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="b12bd693bb7580997fa08c163b6c91d65afd3c016d9dbb69b3a75a78a8a917e1" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557143 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="ec758b8a6824700021b92bcf01c6881e87a7af7bbc0acf6895ec0b0549188a0c" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557149 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="044efc2d7955bb08fe4ff237c3a7e4e25d9ab4e72fa5d3faa7c58ac27561b350" exitCode=0 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557228 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"5fa5e2d9ca2571b7361e659ef85544eb30c548cf9527ac1a3be6a7a829e8fbee"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557259 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"56a854520d26c749a116af4b530898a508240c3791da8d8b127790fb93dfdcc0"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557270 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"b12bd693bb7580997fa08c163b6c91d65afd3c016d9dbb69b3a75a78a8a917e1"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557278 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"ec758b8a6824700021b92bcf01c6881e87a7af7bbc0acf6895ec0b0549188a0c"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.557287 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"044efc2d7955bb08fe4ff237c3a7e4e25d9ab4e72fa5d3faa7c58ac27561b350"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.577546 4890 generic.go:334] "Generic (PLEG): container finished" podID="d3466f4b-2d63-490d-bae0-0921a4874daa" containerID="fec4b0c0a2231fb8d38d939d55a6826e9794606484374bcac4d37face3381fe7" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.577638 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5969dffb49-ng442" event={"ID":"d3466f4b-2d63-490d-bae0-0921a4874daa","Type":"ContainerDied","Data":"fec4b0c0a2231fb8d38d939d55a6826e9794606484374bcac4d37face3381fe7"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.589007 4890 generic.go:334] "Generic (PLEG): container finished" podID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerID="6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333" exitCode=143 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.589205 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-6946c9f5b4-2l82t" event={"ID":"33bbda2a-fde6-466f-92c8-88556941b8a3","Type":"ContainerDied","Data":"6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333"} Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.696303 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4691-account-create-update-77kb4"] Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:59.739195 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09 is running failed: container process not found" containerID="7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:59.739723 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09 is running failed: container process not found" containerID="7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:59.739969 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09 is running failed: container process not found" containerID="7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:59.740022 4890 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09 is running failed: 
container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerName="ovsdbserver-nb" Jan 21 15:56:59 crc kubenswrapper[4890]: W0121 15:56:59.771965 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3ca330d_0795_4c1d_8a5e_12df75f280ba.slice/crio-6e8e7c123b067cc133762a2ce16cbfa6868d7222a6de39ed4ee63d3dd8000a69 WatchSource:0}: Error finding container 6e8e7c123b067cc133762a2ce16cbfa6868d7222a6de39ed4ee63d3dd8000a69: Status 404 returned error can't find the container with id 6e8e7c123b067cc133762a2ce16cbfa6868d7222a6de39ed4ee63d3dd8000a69 Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.786788 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:59.794866 4890 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:56:59 crc kubenswrapper[4890]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: if [ -n "barbican" ]; then Jan 21 15:56:59 crc kubenswrapper[4890]: GRANT_DATABASE="barbican" Jan 21 15:56:59 crc kubenswrapper[4890]: else Jan 21 15:56:59 crc kubenswrapper[4890]: GRANT_DATABASE="*" Jan 21 15:56:59 crc 
kubenswrapper[4890]: fi Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: # going for maximum compatibility here: Jan 21 15:56:59 crc kubenswrapper[4890]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 15:56:59 crc kubenswrapper[4890]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 15:56:59 crc kubenswrapper[4890]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 15:56:59 crc kubenswrapper[4890]: # support updates Jan 21 15:56:59 crc kubenswrapper[4890]: Jan 21 15:56:59 crc kubenswrapper[4890]: $MYSQL_CMD < logger="UnhandledError" Jan 21 15:56:59 crc kubenswrapper[4890]: E0121 15:56:59.796685 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-4691-account-create-update-77kb4" podUID="f3ca330d-0795-4c1d-8a5e-12df75f280ba" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.937320 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013a38e6-319d-4fd9-bba3-a05b6c10acd9" path="/var/lib/kubelet/pods/013a38e6-319d-4fd9-bba3-a05b6c10acd9/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.937999 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11a95a75-e775-40eb-8e62-74b4e9b04f1f" path="/var/lib/kubelet/pods/11a95a75-e775-40eb-8e62-74b4e9b04f1f/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.938798 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="295fdf34-4879-4ba5-993a-424850ac8e46" path="/var/lib/kubelet/pods/295fdf34-4879-4ba5-993a-424850ac8e46/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.941988 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6dfa05-969b-4691-84e5-7ca46d82b5c2" 
path="/var/lib/kubelet/pods/2d6dfa05-969b-4691-84e5-7ca46d82b5c2/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.944992 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a47ebe-8900-4913-b7a1-9988e32cc5dc" path="/var/lib/kubelet/pods/55a47ebe-8900-4913-b7a1-9988e32cc5dc/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.945696 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581dea93-d2d2-45fd-9b38-c0829c031b5c" path="/var/lib/kubelet/pods/581dea93-d2d2-45fd-9b38-c0829c031b5c/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.946317 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="637e1cbc-a769-4deb-926c-ec36b9b6dc61" path="/var/lib/kubelet/pods/637e1cbc-a769-4deb-926c-ec36b9b6dc61/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.947676 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9ab0a2-4598-4893-bf8b-c216f4f4b692" path="/var/lib/kubelet/pods/8f9ab0a2-4598-4893-bf8b-c216f4f4b692/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.949277 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916a58e6-1bc6-47d4-a82d-15979fbf9dea" path="/var/lib/kubelet/pods/916a58e6-1bc6-47d4-a82d-15979fbf9dea/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.950517 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb2e2a4d-9099-4d00-9f68-cd52b6566215" path="/var/lib/kubelet/pods/bb2e2a4d-9099-4d00-9f68-cd52b6566215/volumes" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.965424 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.965636 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4e21e0c9-91df-4f87-a32f-30fa3d3fa874/ovsdbserver-sb/0.log" Jan 21 15:56:59 crc kubenswrapper[4890]: I0121 15:56:59.965717 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.035335 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3ab783d9-382b-4b61-85f0-f4a82160effe/ovsdbserver-nb/0.log" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.035427 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049343 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdb-rundir\") pod \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049420 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-scripts\") pod \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049477 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-svc\") pod \"212a7372-7b31-40f6-bef8-fc76925be961\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049526 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-nb\") pod \"212a7372-7b31-40f6-bef8-fc76925be961\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049564 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx84x\" (UniqueName: \"kubernetes.io/projected/212a7372-7b31-40f6-bef8-fc76925be961-kube-api-access-bx84x\") pod \"212a7372-7b31-40f6-bef8-fc76925be961\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049615 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdbserver-sb-tls-certs\") pod \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049660 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049713 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-config\") pod \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049750 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-combined-ca-bundle\") pod \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\" (UID: 
\"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049783 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-sb\") pod \"212a7372-7b31-40f6-bef8-fc76925be961\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049816 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-config\") pod \"212a7372-7b31-40f6-bef8-fc76925be961\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.049940 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-metrics-certs-tls-certs\") pod \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.050003 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h548d\" (UniqueName: \"kubernetes.io/projected/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-kube-api-access-h548d\") pod \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\" (UID: \"4e21e0c9-91df-4f87-a32f-30fa3d3fa874\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.050042 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-swift-storage-0\") pod \"212a7372-7b31-40f6-bef8-fc76925be961\" (UID: \"212a7372-7b31-40f6-bef8-fc76925be961\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.055579 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "4e21e0c9-91df-4f87-a32f-30fa3d3fa874" (UID: "4e21e0c9-91df-4f87-a32f-30fa3d3fa874"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.056850 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-scripts" (OuterVolumeSpecName: "scripts") pod "4e21e0c9-91df-4f87-a32f-30fa3d3fa874" (UID: "4e21e0c9-91df-4f87-a32f-30fa3d3fa874"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.057796 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-config" (OuterVolumeSpecName: "config") pod "4e21e0c9-91df-4f87-a32f-30fa3d3fa874" (UID: "4e21e0c9-91df-4f87-a32f-30fa3d3fa874"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.065963 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.076563 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "4e21e0c9-91df-4f87-a32f-30fa3d3fa874" (UID: "4e21e0c9-91df-4f87-a32f-30fa3d3fa874"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.076630 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/212a7372-7b31-40f6-bef8-fc76925be961-kube-api-access-bx84x" (OuterVolumeSpecName: "kube-api-access-bx84x") pod "212a7372-7b31-40f6-bef8-fc76925be961" (UID: "212a7372-7b31-40f6-bef8-fc76925be961"). InnerVolumeSpecName "kube-api-access-bx84x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.101135 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5wh28_57d2ee81-accb-4ff7-8fa6-52ed7d728258/openstack-network-exporter/0.log" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.101457 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.130829 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-kube-api-access-h548d" (OuterVolumeSpecName: "kube-api-access-h548d") pod "4e21e0c9-91df-4f87-a32f-30fa3d3fa874" (UID: "4e21e0c9-91df-4f87-a32f-30fa3d3fa874"). InnerVolumeSpecName "kube-api-access-h548d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158253 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqw9b\" (UniqueName: \"kubernetes.io/projected/3ab783d9-382b-4b61-85f0-f4a82160effe-kube-api-access-vqw9b\") pod \"3ab783d9-382b-4b61-85f0-f4a82160effe\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158412 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-combined-ca-bundle\") pod \"3ab783d9-382b-4b61-85f0-f4a82160effe\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158462 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6zsb\" (UniqueName: \"kubernetes.io/projected/defb5f2d-053c-4b32-beb1-d10d70bacce1-kube-api-access-t6zsb\") pod \"defb5f2d-053c-4b32-beb1-d10d70bacce1\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158562 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdb-rundir\") pod \"3ab783d9-382b-4b61-85f0-f4a82160effe\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158612 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config-secret\") pod \"defb5f2d-053c-4b32-beb1-d10d70bacce1\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158641 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-scripts\") pod \"3ab783d9-382b-4b61-85f0-f4a82160effe\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158724 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config\") pod \"defb5f2d-053c-4b32-beb1-d10d70bacce1\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158767 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-combined-ca-bundle\") pod \"defb5f2d-053c-4b32-beb1-d10d70bacce1\" (UID: \"defb5f2d-053c-4b32-beb1-d10d70bacce1\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158816 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"3ab783d9-382b-4b61-85f0-f4a82160effe\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158877 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-metrics-certs-tls-certs\") pod \"3ab783d9-382b-4b61-85f0-f4a82160effe\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.158941 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-config\") pod \"3ab783d9-382b-4b61-85f0-f4a82160effe\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " Jan 21 15:57:00 crc 
kubenswrapper[4890]: I0121 15:57:00.159044 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdbserver-nb-tls-certs\") pod \"3ab783d9-382b-4b61-85f0-f4a82160effe\" (UID: \"3ab783d9-382b-4b61-85f0-f4a82160effe\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.159929 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.159948 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx84x\" (UniqueName: \"kubernetes.io/projected/212a7372-7b31-40f6-bef8-fc76925be961-kube-api-access-bx84x\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.159973 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.160006 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.160019 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h548d\" (UniqueName: \"kubernetes.io/projected/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-kube-api-access-h548d\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.160031 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 
15:57:00.164217 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "3ab783d9-382b-4b61-85f0-f4a82160effe" (UID: "3ab783d9-382b-4b61-85f0-f4a82160effe"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.165576 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-config" (OuterVolumeSpecName: "config") pod "3ab783d9-382b-4b61-85f0-f4a82160effe" (UID: "3ab783d9-382b-4b61-85f0-f4a82160effe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.166238 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-scripts" (OuterVolumeSpecName: "scripts") pod "3ab783d9-382b-4b61-85f0-f4a82160effe" (UID: "3ab783d9-382b-4b61-85f0-f4a82160effe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.166607 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab783d9-382b-4b61-85f0-f4a82160effe-kube-api-access-vqw9b" (OuterVolumeSpecName: "kube-api-access-vqw9b") pod "3ab783d9-382b-4b61-85f0-f4a82160effe" (UID: "3ab783d9-382b-4b61-85f0-f4a82160effe"). InnerVolumeSpecName "kube-api-access-vqw9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.185644 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.200704 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defb5f2d-053c-4b32-beb1-d10d70bacce1-kube-api-access-t6zsb" (OuterVolumeSpecName: "kube-api-access-t6zsb") pod "defb5f2d-053c-4b32-beb1-d10d70bacce1" (UID: "defb5f2d-053c-4b32-beb1-d10d70bacce1"). InnerVolumeSpecName "kube-api-access-t6zsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.200739 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "3ab783d9-382b-4b61-85f0-f4a82160effe" (UID: "3ab783d9-382b-4b61-85f0-f4a82160effe"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.266160 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-vencrypt-tls-certs\") pod \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.266269 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovs-rundir\") pod \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.268072 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-nova-novncproxy-tls-certs\") pod \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\" (UID: 
\"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.268141 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-combined-ca-bundle\") pod \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.268207 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-combined-ca-bundle\") pod \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.268261 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-config-data\") pod \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.268298 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn8b8\" (UniqueName: \"kubernetes.io/projected/57d2ee81-accb-4ff7-8fa6-52ed7d728258-kube-api-access-hn8b8\") pod \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.268434 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxws7\" (UniqueName: \"kubernetes.io/projected/052ad7d6-6d71-4b3b-962a-db635b2df4a3-kube-api-access-zxws7\") pod \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\" (UID: \"052ad7d6-6d71-4b3b-962a-db635b2df4a3\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.268846 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/57d2ee81-accb-4ff7-8fa6-52ed7d728258-config\") pod \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.268942 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-metrics-certs-tls-certs\") pod \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.269018 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovn-rundir\") pod \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\" (UID: \"57d2ee81-accb-4ff7-8fa6-52ed7d728258\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.269798 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.269831 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.269844 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab783d9-382b-4b61-85f0-f4a82160effe-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.269857 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqw9b\" (UniqueName: \"kubernetes.io/projected/3ab783d9-382b-4b61-85f0-f4a82160effe-kube-api-access-vqw9b\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.269870 
4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6zsb\" (UniqueName: \"kubernetes.io/projected/defb5f2d-053c-4b32-beb1-d10d70bacce1-kube-api-access-t6zsb\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.269882 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.275399 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.276082 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "57d2ee81-accb-4ff7-8fa6-52ed7d728258" (UID: "57d2ee81-accb-4ff7-8fa6-52ed7d728258"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.302787 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "212a7372-7b31-40f6-bef8-fc76925be961" (UID: "212a7372-7b31-40f6-bef8-fc76925be961"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.303579 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d2ee81-accb-4ff7-8fa6-52ed7d728258-kube-api-access-hn8b8" (OuterVolumeSpecName: "kube-api-access-hn8b8") pod "57d2ee81-accb-4ff7-8fa6-52ed7d728258" (UID: "57d2ee81-accb-4ff7-8fa6-52ed7d728258"). InnerVolumeSpecName "kube-api-access-hn8b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.306844 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "57d2ee81-accb-4ff7-8fa6-52ed7d728258" (UID: "57d2ee81-accb-4ff7-8fa6-52ed7d728258"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.306939 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.306989 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data podName:9bb9aa52-0895-418e-8e0b-d922948e85a7 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:04.306974464 +0000 UTC m=+1506.668416873 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data") pod "rabbitmq-cell1-server-0" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7") : configmap "rabbitmq-cell1-config-data" not found Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.308900 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57d2ee81-accb-4ff7-8fa6-52ed7d728258-config" (OuterVolumeSpecName: "config") pod "57d2ee81-accb-4ff7-8fa6-52ed7d728258" (UID: "57d2ee81-accb-4ff7-8fa6-52ed7d728258"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.311843 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.312279 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.314065 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.318776 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.318844 4890 prober.go:104] "Probe errored" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.331129 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-config" (OuterVolumeSpecName: "config") pod "212a7372-7b31-40f6-bef8-fc76925be961" (UID: "212a7372-7b31-40f6-bef8-fc76925be961"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.334176 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.367922 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052ad7d6-6d71-4b3b-962a-db635b2df4a3-kube-api-access-zxws7" (OuterVolumeSpecName: "kube-api-access-zxws7") pod "052ad7d6-6d71-4b3b-962a-db635b2df4a3" (UID: "052ad7d6-6d71-4b3b-962a-db635b2df4a3"). InnerVolumeSpecName "kube-api-access-zxws7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.368683 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.368751 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.376687 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn8b8\" (UniqueName: \"kubernetes.io/projected/57d2ee81-accb-4ff7-8fa6-52ed7d728258-kube-api-access-hn8b8\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.376715 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.376728 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxws7\" (UniqueName: \"kubernetes.io/projected/052ad7d6-6d71-4b3b-962a-db635b2df4a3-kube-api-access-zxws7\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.376740 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57d2ee81-accb-4ff7-8fa6-52ed7d728258-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.376756 4890 reconciler_common.go:293] 
"Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.376768 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.376779 4890 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.376791 4890 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/57d2ee81-accb-4ff7-8fa6-52ed7d728258-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.426457 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "defb5f2d-053c-4b32-beb1-d10d70bacce1" (UID: "defb5f2d-053c-4b32-beb1-d10d70bacce1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.429563 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.441402 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ab783d9-382b-4b61-85f0-f4a82160effe" (UID: "3ab783d9-382b-4b61-85f0-f4a82160effe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.484175 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.484204 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.484214 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.488698 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "defb5f2d-053c-4b32-beb1-d10d70bacce1" (UID: "defb5f2d-053c-4b32-beb1-d10d70bacce1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.491851 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "052ad7d6-6d71-4b3b-962a-db635b2df4a3" (UID: "052ad7d6-6d71-4b3b-962a-db635b2df4a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.557549 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-config-data" (OuterVolumeSpecName: "config-data") pod "052ad7d6-6d71-4b3b-962a-db635b2df4a3" (UID: "052ad7d6-6d71-4b3b-962a-db635b2df4a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.588170 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.588198 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.588208 4890 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.588268 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.588318 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data podName:caae7093-b594-47fb-b863-38d825f0048d nodeName:}" failed. No retries permitted until 2026-01-21 15:57:04.588302194 +0000 UTC m=+1506.949744603 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data") pod "rabbitmq-server-0" (UID: "caae7093-b594-47fb-b863-38d825f0048d") : configmap "rabbitmq-config-data" not found Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.590438 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "212a7372-7b31-40f6-bef8-fc76925be961" (UID: "212a7372-7b31-40f6-bef8-fc76925be961"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.599280 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e21e0c9-91df-4f87-a32f-30fa3d3fa874" (UID: "4e21e0c9-91df-4f87-a32f-30fa3d3fa874"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.602254 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "defb5f2d-053c-4b32-beb1-d10d70bacce1" (UID: "defb5f2d-053c-4b32-beb1-d10d70bacce1"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.603729 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "212a7372-7b31-40f6-bef8-fc76925be961" (UID: "212a7372-7b31-40f6-bef8-fc76925be961"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.604969 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "052ad7d6-6d71-4b3b-962a-db635b2df4a3" (UID: "052ad7d6-6d71-4b3b-962a-db635b2df4a3"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.608435 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5wh28_57d2ee81-accb-4ff7-8fa6-52ed7d728258/openstack-network-exporter/0.log" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.608623 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5wh28" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.608834 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5wh28" event={"ID":"57d2ee81-accb-4ff7-8fa6-52ed7d728258","Type":"ContainerDied","Data":"f682bc92c61a33cc5de65221dcbf0ebf1fde6e5e3faceb23a8305e76d84ec7f1"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.608889 4890 scope.go:117] "RemoveContainer" containerID="e1aa6bfb45b550829709119ceae8ae53f1b530480df2a6e2a81fbe2d0d43a190" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.613582 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4691-account-create-update-77kb4" event={"ID":"f3ca330d-0795-4c1d-8a5e-12df75f280ba","Type":"ContainerStarted","Data":"6e8e7c123b067cc133762a2ce16cbfa6868d7222a6de39ed4ee63d3dd8000a69"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.619138 4890 generic.go:334] "Generic (PLEG): container finished" podID="d009f76d-bc65-453c-a05f-29454314ab7a" 
containerID="4a223e2232d09a7902cafe5997f0744b43b30ca16b7805665ca1778aa131272b" exitCode=0 Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.619164 4890 generic.go:334] "Generic (PLEG): container finished" podID="d009f76d-bc65-453c-a05f-29454314ab7a" containerID="900fd7400b1809198eec2f87e30f7758ce7b4277e57b7f022112ff25e938935f" exitCode=0 Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.619207 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64d44774fc-92wps" event={"ID":"d009f76d-bc65-453c-a05f-29454314ab7a","Type":"ContainerDied","Data":"4a223e2232d09a7902cafe5997f0744b43b30ca16b7805665ca1778aa131272b"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.619227 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64d44774fc-92wps" event={"ID":"d009f76d-bc65-453c-a05f-29454314ab7a","Type":"ContainerDied","Data":"900fd7400b1809198eec2f87e30f7758ce7b4277e57b7f022112ff25e938935f"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.619238 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64d44774fc-92wps" event={"ID":"d009f76d-bc65-453c-a05f-29454314ab7a","Type":"ContainerDied","Data":"042023fed31004470a13edc002d94f939e412a6bca85e4a079554400f7f2fce7"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.619247 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="042023fed31004470a13edc002d94f939e412a6bca85e4a079554400f7f2fce7" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.623721 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "212a7372-7b31-40f6-bef8-fc76925be961" (UID: "212a7372-7b31-40f6-bef8-fc76925be961"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.632403 4890 generic.go:334] "Generic (PLEG): container finished" podID="052ad7d6-6d71-4b3b-962a-db635b2df4a3" containerID="03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874" exitCode=0 Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.632488 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.632521 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"052ad7d6-6d71-4b3b-962a-db635b2df4a3","Type":"ContainerDied","Data":"03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.632552 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"052ad7d6-6d71-4b3b-962a-db635b2df4a3","Type":"ContainerDied","Data":"7cfa8797413a609429b2f87219d319ac9d7bd8490afb2f5ac097d5fd89588e3b"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.652160 4890 generic.go:334] "Generic (PLEG): container finished" podID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerID="17017fb4db752be398957128e72379f1e6bbd55f2c985855c266996c3fbae23f" exitCode=0 Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.652303 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1463d4e1-9ed2-4f45-b473-a94d18a4156f","Type":"ContainerDied","Data":"17017fb4db752be398957128e72379f1e6bbd55f2c985855c266996c3fbae23f"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.652336 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1463d4e1-9ed2-4f45-b473-a94d18a4156f","Type":"ContainerDied","Data":"43a4af8e5cde4fc445ec725b05d317b2194127514557a6317245a2e736905b36"} Jan 21 15:57:00 crc 
kubenswrapper[4890]: I0121 15:57:00.652360 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a4af8e5cde4fc445ec725b05d317b2194127514557a6317245a2e736905b36" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.659165 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "4e21e0c9-91df-4f87-a32f-30fa3d3fa874" (UID: "4e21e0c9-91df-4f87-a32f-30fa3d3fa874"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.660559 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" event={"ID":"212a7372-7b31-40f6-bef8-fc76925be961","Type":"ContainerDied","Data":"02c0884537c3d2ac2df7cdf7b76d1c17bbd819da04a8cdb0f5e6ecb766b6b950"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.660680 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.661289 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57d2ee81-accb-4ff7-8fa6-52ed7d728258" (UID: "57d2ee81-accb-4ff7-8fa6-52ed7d728258"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.678455 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "3ab783d9-382b-4b61-85f0-f4a82160effe" (UID: "3ab783d9-382b-4b61-85f0-f4a82160effe"). 
InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.682865 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.682910 4890 generic.go:334] "Generic (PLEG): container finished" podID="91b03ee9-0cb8-49eb-b3da-3d1c42e15720" containerID="ee8636883cf7ef685bc793e2761b19d6a77deb5c7898b985a0cc704d99683d91" exitCode=0 Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.683006 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91b03ee9-0cb8-49eb-b3da-3d1c42e15720","Type":"ContainerDied","Data":"ee8636883cf7ef685bc793e2761b19d6a77deb5c7898b985a0cc704d99683d91"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.683041 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91b03ee9-0cb8-49eb-b3da-3d1c42e15720","Type":"ContainerDied","Data":"75966d5f0fc4cbeafe8e22e665477e5daae59c68c111d960daa8e1678d776b2c"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.683051 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75966d5f0fc4cbeafe8e22e665477e5daae59c68c111d960daa8e1678d776b2c" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.684537 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1136-account-create-update-bc5w2" event={"ID":"fa88b847-d54b-4e99-8dee-39c83f0a06d8","Type":"ContainerDied","Data":"69801fdd15c722ce03ed40c1d26686cac788b529eac6f8cff5dc67f90aa64bb2"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.684566 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69801fdd15c722ce03ed40c1d26686cac788b529eac6f8cff5dc67f90aa64bb2" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.686666 4890 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "052ad7d6-6d71-4b3b-962a-db635b2df4a3" (UID: "052ad7d6-6d71-4b3b-962a-db635b2df4a3"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.686729 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3ab783d9-382b-4b61-85f0-f4a82160effe/ovsdbserver-nb/0.log" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.686838 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.687551 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3ab783d9-382b-4b61-85f0-f4a82160effe","Type":"ContainerDied","Data":"b9855ec1171a6828e4b84ce1c79fef544afce75c5d28ca7615c5af5eaf77ad1e"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.689992 4890 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.690017 4890 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/defb5f2d-053c-4b32-beb1-d10d70bacce1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.690028 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.690038 4890 reconciler_common.go:293] 
"Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.690046 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.690055 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.690063 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212a7372-7b31-40f6-bef8-fc76925be961-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.690071 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.690079 4890 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.690088 4890 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/052ad7d6-6d71-4b3b-962a-db635b2df4a3-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.695503 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_4e21e0c9-91df-4f87-a32f-30fa3d3fa874/ovsdbserver-sb/0.log" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.695555 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4e21e0c9-91df-4f87-a32f-30fa3d3fa874","Type":"ContainerDied","Data":"5390a6179693f6250b519db66dffb623100123c375a8f1fc8ae8873748d538c9"} Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.695652 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.706598 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "4e21e0c9-91df-4f87-a32f-30fa3d3fa874" (UID: "4e21e0c9-91df-4f87-a32f-30fa3d3fa874"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.745114 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "3ab783d9-382b-4b61-85f0-f4a82160effe" (UID: "3ab783d9-382b-4b61-85f0-f4a82160effe"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.747383 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "57d2ee81-accb-4ff7-8fa6-52ed7d728258" (UID: "57d2ee81-accb-4ff7-8fa6-52ed7d728258"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.787281 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.791937 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-xdkgv"] Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.792815 4890 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e21e0c9-91df-4f87-a32f-30fa3d3fa874-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.792840 4890 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab783d9-382b-4b61-85f0-f4a82160effe-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.792849 4890 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57d2ee81-accb-4ff7-8fa6-52ed7d728258-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.802819 4890 scope.go:117] "RemoveContainer" containerID="03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.803137 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-xdkgv"] Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.821870 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.821995 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.822227 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.839702 4890 scope.go:117] "RemoveContainer" containerID="03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.841569 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874\": container with ID starting with 03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874 not found: ID does not exist" containerID="03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.841613 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874"} err="failed to get container status \"03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874\": rpc error: code = NotFound desc = could not find container \"03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874\": container with ID starting with 03d57742b98aecd03ef6bd5f168e298a290dbaf2f93602eff90d874a2b90e874 not found: ID does not exist" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.841641 4890 scope.go:117] "RemoveContainer" containerID="e83ef076a5c80f27ea8f77e9616e9b721e5e3861579511656b623a4c8b0a184d" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.878895 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5z4qn"] Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879302 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerName="openstack-network-exporter" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879320 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerName="openstack-network-exporter" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879329 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b03ee9-0cb8-49eb-b3da-3d1c42e15720" containerName="galera" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879335 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b03ee9-0cb8-49eb-b3da-3d1c42e15720" containerName="galera" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879361 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerName="probe" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879368 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerName="probe" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879382 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerName="ovsdbserver-nb" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879394 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerName="ovsdbserver-nb" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879408 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerName="openstack-network-exporter" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879416 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerName="openstack-network-exporter" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879425 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerName="cinder-scheduler" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879433 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerName="cinder-scheduler" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879446 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerName="ovsdbserver-sb" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879454 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerName="ovsdbserver-sb" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879469 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052ad7d6-6d71-4b3b-962a-db635b2df4a3" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879476 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="052ad7d6-6d71-4b3b-962a-db635b2df4a3" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879489 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d009f76d-bc65-453c-a05f-29454314ab7a" containerName="proxy-httpd" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879496 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d009f76d-bc65-453c-a05f-29454314ab7a" containerName="proxy-httpd" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879507 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212a7372-7b31-40f6-bef8-fc76925be961" containerName="dnsmasq-dns" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879513 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="212a7372-7b31-40f6-bef8-fc76925be961" containerName="dnsmasq-dns" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879525 4890 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d009f76d-bc65-453c-a05f-29454314ab7a" containerName="proxy-server" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879531 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d009f76d-bc65-453c-a05f-29454314ab7a" containerName="proxy-server" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879545 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d2ee81-accb-4ff7-8fa6-52ed7d728258" containerName="openstack-network-exporter" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879552 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d2ee81-accb-4ff7-8fa6-52ed7d728258" containerName="openstack-network-exporter" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879564 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b03ee9-0cb8-49eb-b3da-3d1c42e15720" containerName="mysql-bootstrap" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879569 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b03ee9-0cb8-49eb-b3da-3d1c42e15720" containerName="mysql-bootstrap" Jan 21 15:57:00 crc kubenswrapper[4890]: E0121 15:57:00.879581 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212a7372-7b31-40f6-bef8-fc76925be961" containerName="init" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879587 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="212a7372-7b31-40f6-bef8-fc76925be961" containerName="init" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879790 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerName="ovsdbserver-sb" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879809 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d009f76d-bc65-453c-a05f-29454314ab7a" containerName="proxy-server" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879823 4890 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="57d2ee81-accb-4ff7-8fa6-52ed7d728258" containerName="openstack-network-exporter" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879829 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="052ad7d6-6d71-4b3b-962a-db635b2df4a3" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879840 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerName="openstack-network-exporter" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879847 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" containerName="ovsdbserver-nb" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879857 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b03ee9-0cb8-49eb-b3da-3d1c42e15720" containerName="galera" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879867 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" containerName="openstack-network-exporter" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879876 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="212a7372-7b31-40f6-bef8-fc76925be961" containerName="dnsmasq-dns" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879885 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerName="cinder-scheduler" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879892 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d009f76d-bc65-453c-a05f-29454314ab7a" containerName="proxy-httpd" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.879901 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" containerName="probe" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.880666 4890 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.883838 4890 scope.go:117] "RemoveContainer" containerID="3914c5aeda2235f7697d3a660d19be4755cff8e1c1da33467a876d8003763f55" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.887166 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.893939 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h69c\" (UniqueName: \"kubernetes.io/projected/fa88b847-d54b-4e99-8dee-39c83f0a06d8-kube-api-access-5h69c\") pod \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\" (UID: \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.893996 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-log-httpd\") pod \"d009f76d-bc65-453c-a05f-29454314ab7a\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894053 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-scripts\") pod \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894073 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-default\") pod \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894104 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-cjp2f\" (UniqueName: \"kubernetes.io/projected/1463d4e1-9ed2-4f45-b473-a94d18a4156f-kube-api-access-cjp2f\") pod \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894125 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-combined-ca-bundle\") pod \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894163 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-galera-tls-certs\") pod \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894214 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-combined-ca-bundle\") pod \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894231 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1463d4e1-9ed2-4f45-b473-a94d18a4156f-etc-machine-id\") pod \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894247 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-internal-tls-certs\") pod 
\"d009f76d-bc65-453c-a05f-29454314ab7a\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894264 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6qfr\" (UniqueName: \"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-kube-api-access-g6qfr\") pod \"d009f76d-bc65-453c-a05f-29454314ab7a\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894284 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data-custom\") pod \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894308 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-generated\") pod \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894325 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-etc-swift\") pod \"d009f76d-bc65-453c-a05f-29454314ab7a\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894371 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data\") pod \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\" (UID: \"1463d4e1-9ed2-4f45-b473-a94d18a4156f\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894389 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa88b847-d54b-4e99-8dee-39c83f0a06d8-operator-scripts\") pod \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\" (UID: \"fa88b847-d54b-4e99-8dee-39c83f0a06d8\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894416 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kolla-config\") pod \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894445 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-operator-scripts\") pod \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894460 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-combined-ca-bundle\") pod \"d009f76d-bc65-453c-a05f-29454314ab7a\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894479 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-public-tls-certs\") pod \"d009f76d-bc65-453c-a05f-29454314ab7a\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894502 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq8rp\" (UniqueName: \"kubernetes.io/projected/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kube-api-access-zq8rp\") pod 
\"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894525 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-run-httpd\") pod \"d009f76d-bc65-453c-a05f-29454314ab7a\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894570 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-config-data\") pod \"d009f76d-bc65-453c-a05f-29454314ab7a\" (UID: \"d009f76d-bc65-453c-a05f-29454314ab7a\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.894589 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\" (UID: \"91b03ee9-0cb8-49eb-b3da-3d1c42e15720\") " Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.896108 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1463d4e1-9ed2-4f45-b473-a94d18a4156f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1463d4e1-9ed2-4f45-b473-a94d18a4156f" (UID: "1463d4e1-9ed2-4f45-b473-a94d18a4156f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.902693 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-scripts" (OuterVolumeSpecName: "scripts") pod "1463d4e1-9ed2-4f45-b473-a94d18a4156f" (UID: "1463d4e1-9ed2-4f45-b473-a94d18a4156f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.902937 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91b03ee9-0cb8-49eb-b3da-3d1c42e15720" (UID: "91b03ee9-0cb8-49eb-b3da-3d1c42e15720"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.903919 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "91b03ee9-0cb8-49eb-b3da-3d1c42e15720" (UID: "91b03ee9-0cb8-49eb-b3da-3d1c42e15720"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.904306 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d009f76d-bc65-453c-a05f-29454314ab7a" (UID: "d009f76d-bc65-453c-a05f-29454314ab7a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.904772 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa88b847-d54b-4e99-8dee-39c83f0a06d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa88b847-d54b-4e99-8dee-39c83f0a06d8" (UID: "fa88b847-d54b-4e99-8dee-39c83f0a06d8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.905027 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "91b03ee9-0cb8-49eb-b3da-3d1c42e15720" (UID: "91b03ee9-0cb8-49eb-b3da-3d1c42e15720"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.906519 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "91b03ee9-0cb8-49eb-b3da-3d1c42e15720" (UID: "91b03ee9-0cb8-49eb-b3da-3d1c42e15720"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.909159 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1463d4e1-9ed2-4f45-b473-a94d18a4156f" (UID: "1463d4e1-9ed2-4f45-b473-a94d18a4156f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.912516 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5z4qn"] Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.913750 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa88b847-d54b-4e99-8dee-39c83f0a06d8-kube-api-access-5h69c" (OuterVolumeSpecName: "kube-api-access-5h69c") pod "fa88b847-d54b-4e99-8dee-39c83f0a06d8" (UID: "fa88b847-d54b-4e99-8dee-39c83f0a06d8"). InnerVolumeSpecName "kube-api-access-5h69c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.915855 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d009f76d-bc65-453c-a05f-29454314ab7a" (UID: "d009f76d-bc65-453c-a05f-29454314ab7a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.917576 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d009f76d-bc65-453c-a05f-29454314ab7a" (UID: "d009f76d-bc65-453c-a05f-29454314ab7a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.924282 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1463d4e1-9ed2-4f45-b473-a94d18a4156f-kube-api-access-cjp2f" (OuterVolumeSpecName: "kube-api-access-cjp2f") pod "1463d4e1-9ed2-4f45-b473-a94d18a4156f" (UID: "1463d4e1-9ed2-4f45-b473-a94d18a4156f"). InnerVolumeSpecName "kube-api-access-cjp2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: W0121 15:57:00.927081 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86742085_590c_4ce5_b694_8a91a90c0b6f.slice/crio-75d5bd667ec839ed05c5c7f5e39e1a3ad37b1cce4bbf9250306b0f23226f3286 WatchSource:0}: Error finding container 75d5bd667ec839ed05c5c7f5e39e1a3ad37b1cce4bbf9250306b0f23226f3286: Status 404 returned error can't find the container with id 75d5bd667ec839ed05c5c7f5e39e1a3ad37b1cce4bbf9250306b0f23226f3286 Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.933604 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kube-api-access-zq8rp" (OuterVolumeSpecName: "kube-api-access-zq8rp") pod "91b03ee9-0cb8-49eb-b3da-3d1c42e15720" (UID: "91b03ee9-0cb8-49eb-b3da-3d1c42e15720"). InnerVolumeSpecName "kube-api-access-zq8rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.950343 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-de17-account-create-update-crzrr"] Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.960836 4890 scope.go:117] "RemoveContainer" containerID="17e25bf33dcd118f48a8e8f7cae037f543abe8f9a7ffe1c912b57bf6e4df359b" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.989978 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-kube-api-access-g6qfr" (OuterVolumeSpecName: "kube-api-access-g6qfr") pod "d009f76d-bc65-453c-a05f-29454314ab7a" (UID: "d009f76d-bc65-453c-a05f-29454314ab7a"). InnerVolumeSpecName "kube-api-access-g6qfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:00 crc kubenswrapper[4890]: I0121 15:57:00.994171 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.002441 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqf84\" (UniqueName: \"kubernetes.io/projected/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-kube-api-access-dqf84\") pod \"root-account-create-update-5z4qn\" (UID: \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\") " pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.002577 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts\") pod \"root-account-create-update-5z4qn\" (UID: \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\") " pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:01 crc kubenswrapper[4890]: E0121 15:57:01.003013 4890 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:57:01 crc kubenswrapper[4890]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: if [ -n "neutron" ]; 
then Jan 21 15:57:01 crc kubenswrapper[4890]: GRANT_DATABASE="neutron" Jan 21 15:57:01 crc kubenswrapper[4890]: else Jan 21 15:57:01 crc kubenswrapper[4890]: GRANT_DATABASE="*" Jan 21 15:57:01 crc kubenswrapper[4890]: fi Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: # going for maximum compatibility here: Jan 21 15:57:01 crc kubenswrapper[4890]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 15:57:01 crc kubenswrapper[4890]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 15:57:01 crc kubenswrapper[4890]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 15:57:01 crc kubenswrapper[4890]: # support updates Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: $MYSQL_CMD < logger="UnhandledError" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.007758 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010125 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "91b03ee9-0cb8-49eb-b3da-3d1c42e15720" (UID: "91b03ee9-0cb8-49eb-b3da-3d1c42e15720"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010383 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010399 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010411 4890 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010419 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa88b847-d54b-4e99-8dee-39c83f0a06d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010428 4890 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010437 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010445 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq8rp\" (UniqueName: \"kubernetes.io/projected/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-kube-api-access-zq8rp\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc 
kubenswrapper[4890]: I0121 15:57:01.010453 4890 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010462 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h69c\" (UniqueName: \"kubernetes.io/projected/fa88b847-d54b-4e99-8dee-39c83f0a06d8-kube-api-access-5h69c\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010469 4890 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d009f76d-bc65-453c-a05f-29454314ab7a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010479 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010487 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010496 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjp2f\" (UniqueName: \"kubernetes.io/projected/1463d4e1-9ed2-4f45-b473-a94d18a4156f-kube-api-access-cjp2f\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010508 4890 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1463d4e1-9ed2-4f45-b473-a94d18a4156f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.010517 4890 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-g6qfr\" (UniqueName: \"kubernetes.io/projected/d009f76d-bc65-453c-a05f-29454314ab7a-kube-api-access-g6qfr\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: E0121 15:57:01.010607 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-de17-account-create-update-crzrr" podUID="86742085-590c-4ce5-b694-8a91a90c0b6f" Jan 21 15:57:01 crc kubenswrapper[4890]: E0121 15:57:01.011163 4890 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:57:01 crc kubenswrapper[4890]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: if [ -n "" ]; then Jan 21 15:57:01 crc kubenswrapper[4890]: GRANT_DATABASE="" Jan 21 15:57:01 crc kubenswrapper[4890]: else Jan 21 15:57:01 crc kubenswrapper[4890]: GRANT_DATABASE="*" Jan 21 15:57:01 crc kubenswrapper[4890]: fi Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: # going for maximum compatibility here: Jan 21 15:57:01 crc kubenswrapper[4890]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 15:57:01 crc kubenswrapper[4890]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 15:57:01 crc kubenswrapper[4890]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 15:57:01 crc kubenswrapper[4890]: # support updates Jan 21 15:57:01 crc kubenswrapper[4890]: Jan 21 15:57:01 crc kubenswrapper[4890]: $MYSQL_CMD < logger="UnhandledError" Jan 21 15:57:01 crc kubenswrapper[4890]: E0121 15:57:01.011443 4890 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 21 15:57:01 crc kubenswrapper[4890]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-21T15:56:58Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 21 15:57:01 crc kubenswrapper[4890]: /etc/init.d/functions: line 589: 428 Alarm clock "$@" Jan 21 15:57:01 crc kubenswrapper[4890]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-pmrch" message=< Jan 21 15:57:01 crc kubenswrapper[4890]: Exiting ovn-controller (1) [FAILED] Jan 21 15:57:01 crc kubenswrapper[4890]: Killing ovn-controller (1) [ OK ] Jan 21 15:57:01 crc kubenswrapper[4890]: 2026-01-21T15:56:58Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 21 15:57:01 crc kubenswrapper[4890]: /etc/init.d/functions: line 589: 428 Alarm clock "$@" Jan 21 15:57:01 crc kubenswrapper[4890]: > Jan 21 15:57:01 crc kubenswrapper[4890]: E0121 15:57:01.011469 4890 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 21 15:57:01 crc kubenswrapper[4890]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-21T15:56:58Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 21 15:57:01 crc kubenswrapper[4890]: /etc/init.d/functions: line 589: 428 Alarm clock "$@" Jan 21 15:57:01 crc kubenswrapper[4890]: > pod="openstack/ovn-controller-pmrch" podUID="cdd2d089-a1a5-4e25-920a-a485d0fd319f" containerName="ovn-controller" 
containerID="cri-o://fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.011539 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-pmrch" podUID="cdd2d089-a1a5-4e25-920a-a485d0fd319f" containerName="ovn-controller" containerID="cri-o://fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9" gracePeriod=27 Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.011555 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pmrch" podUID="cdd2d089-a1a5-4e25-920a-a485d0fd319f" containerName="ovn-controller" probeResult="failure" output="" Jan 21 15:57:01 crc kubenswrapper[4890]: E0121 15:57:01.012676 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-p8whv" podUID="ef1ee1ae-c8ba-469c-ad49-896510b81e81" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.013323 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p8whv"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.059973 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-5wh28"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.068736 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-5wh28"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.070938 4890 scope.go:117] "RemoveContainer" containerID="f816fbeb470ee262ad039181a4ae9efe8ea0d75924ce11d2ac8682922df4c451" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.081861 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.088590 4890 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.115084 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.115206 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts\") pod \"root-account-create-update-5z4qn\" (UID: \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\") " pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.117404 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqf84\" (UniqueName: \"kubernetes.io/projected/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-kube-api-access-dqf84\") pod \"root-account-create-update-5z4qn\" (UID: \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\") " pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.117550 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.121886 4890 scope.go:117] "RemoveContainer" containerID="7bb813a96df430cf730cbd1dfe5dc4203c97638b30dd1b67143a66968a5d4d09" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.126002 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.157009 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqf84\" (UniqueName: \"kubernetes.io/projected/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-kube-api-access-dqf84\") pod \"root-account-create-update-5z4qn\" (UID: \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\") " 
pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.159804 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts\") pod \"root-account-create-update-5z4qn\" (UID: \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\") " pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.161771 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.166667 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91b03ee9-0cb8-49eb-b3da-3d1c42e15720" (UID: "91b03ee9-0cb8-49eb-b3da-3d1c42e15720"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.175808 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.180505 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1463d4e1-9ed2-4f45-b473-a94d18a4156f" (UID: "1463d4e1-9ed2-4f45-b473-a94d18a4156f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.184129 4890 scope.go:117] "RemoveContainer" containerID="9c227e45c94f7742e46f8728f499fa534251a81e5033658fec415f426bd7319e" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.192576 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-config-data" (OuterVolumeSpecName: "config-data") pod "d009f76d-bc65-453c-a05f-29454314ab7a" (UID: "d009f76d-bc65-453c-a05f-29454314ab7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.192608 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "91b03ee9-0cb8-49eb-b3da-3d1c42e15720" (UID: "91b03ee9-0cb8-49eb-b3da-3d1c42e15720"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.192673 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d009f76d-bc65-453c-a05f-29454314ab7a" (UID: "d009f76d-bc65-453c-a05f-29454314ab7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.212379 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.219201 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.219223 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.219238 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.219249 4890 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.219259 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b03ee9-0cb8-49eb-b3da-3d1c42e15720-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.255958 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.257952 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d009f76d-bc65-453c-a05f-29454314ab7a" (UID: 
"d009f76d-bc65-453c-a05f-29454314ab7a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.278391 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d009f76d-bc65-453c-a05f-29454314ab7a" (UID: "d009f76d-bc65-453c-a05f-29454314ab7a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.291007 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data" (OuterVolumeSpecName: "config-data") pod "1463d4e1-9ed2-4f45-b473-a94d18a4156f" (UID: "1463d4e1-9ed2-4f45-b473-a94d18a4156f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.320902 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.320959 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1463d4e1-9ed2-4f45-b473-a94d18a4156f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.320971 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d009f76d-bc65-453c-a05f-29454314ab7a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.320985 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.393898 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.408275 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.414464 4890 scope.go:117] "RemoveContainer" containerID="b4c0d71f6821be5944ba5656a2783cabaaf5a89a20f0ae7f0f33f828e00b0bc0" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.523813 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsmxh\" (UniqueName: \"kubernetes.io/projected/f3ca330d-0795-4c1d-8a5e-12df75f280ba-kube-api-access-vsmxh\") pod \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\" (UID: \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.524316 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ca330d-0795-4c1d-8a5e-12df75f280ba-operator-scripts\") pod \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\" (UID: \"f3ca330d-0795-4c1d-8a5e-12df75f280ba\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.524372 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-combined-ca-bundle\") pod \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.524411 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-config-data\") pod 
\"2a25c82c-f72c-4ecb-a760-a568761bd5f2\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.524505 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-internal-tls-certs\") pod \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.524554 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-public-tls-certs\") pod \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.524601 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a25c82c-f72c-4ecb-a760-a568761bd5f2-logs\") pod \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.524656 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxz48\" (UniqueName: \"kubernetes.io/projected/2a25c82c-f72c-4ecb-a760-a568761bd5f2-kube-api-access-nxz48\") pod \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.524762 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-scripts\") pod \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\" (UID: \"2a25c82c-f72c-4ecb-a760-a568761bd5f2\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.526214 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f3ca330d-0795-4c1d-8a5e-12df75f280ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3ca330d-0795-4c1d-8a5e-12df75f280ba" (UID: "f3ca330d-0795-4c1d-8a5e-12df75f280ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.526906 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a25c82c-f72c-4ecb-a760-a568761bd5f2-logs" (OuterVolumeSpecName: "logs") pod "2a25c82c-f72c-4ecb-a760-a568761bd5f2" (UID: "2a25c82c-f72c-4ecb-a760-a568761bd5f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.535073 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ca330d-0795-4c1d-8a5e-12df75f280ba-kube-api-access-vsmxh" (OuterVolumeSpecName: "kube-api-access-vsmxh") pod "f3ca330d-0795-4c1d-8a5e-12df75f280ba" (UID: "f3ca330d-0795-4c1d-8a5e-12df75f280ba"). InnerVolumeSpecName "kube-api-access-vsmxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.540376 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-scripts" (OuterVolumeSpecName: "scripts") pod "2a25c82c-f72c-4ecb-a760-a568761bd5f2" (UID: "2a25c82c-f72c-4ecb-a760-a568761bd5f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.540505 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a25c82c-f72c-4ecb-a760-a568761bd5f2-kube-api-access-nxz48" (OuterVolumeSpecName: "kube-api-access-nxz48") pod "2a25c82c-f72c-4ecb-a760-a568761bd5f2" (UID: "2a25c82c-f72c-4ecb-a760-a568761bd5f2"). 
InnerVolumeSpecName "kube-api-access-nxz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.594614 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a25c82c-f72c-4ecb-a760-a568761bd5f2" (UID: "2a25c82c-f72c-4ecb-a760-a568761bd5f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.595438 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-config-data" (OuterVolumeSpecName: "config-data") pod "2a25c82c-f72c-4ecb-a760-a568761bd5f2" (UID: "2a25c82c-f72c-4ecb-a760-a568761bd5f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.627946 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a25c82c-f72c-4ecb-a760-a568761bd5f2-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.628020 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxz48\" (UniqueName: \"kubernetes.io/projected/2a25c82c-f72c-4ecb-a760-a568761bd5f2-kube-api-access-nxz48\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.628030 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.628058 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsmxh\" (UniqueName: 
\"kubernetes.io/projected/f3ca330d-0795-4c1d-8a5e-12df75f280ba-kube-api-access-vsmxh\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.628067 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ca330d-0795-4c1d-8a5e-12df75f280ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.628085 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.628114 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.642844 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2a25c82c-f72c-4ecb-a760-a568761bd5f2" (UID: "2a25c82c-f72c-4ecb-a760-a568761bd5f2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.652911 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.653220 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="ceilometer-central-agent" containerID="cri-o://2660277560aad838dbebdfb2cd900cfc69db1d476e814c29ad6367cf3448c4ee" gracePeriod=30 Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.653717 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="proxy-httpd" containerID="cri-o://b1850cb5e39351073cf39f1d0e88018e7526c6b8091783f112754a2815cb88bf" gracePeriod=30 Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.653774 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="sg-core" containerID="cri-o://e6874597d1e13caa14de2a102072cb91ab0359d88ae4e3beb3a5adaa31d395bd" gracePeriod=30 Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.653824 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="ceilometer-notification-agent" containerID="cri-o://bdcf29add7cbc483a28d49a26883018699ca78c8f8bcfbac6388fbdd8fd5c94b" gracePeriod=30 Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.674297 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:37866->10.217.0.204:8775: read: connection reset by peer" Jan 21 15:57:01 
crc kubenswrapper[4890]: I0121 15:57:01.674467 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:37878->10.217.0.204:8775: read: connection reset by peer" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.686890 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2a25c82c-f72c-4ecb-a760-a568761bd5f2" (UID: "2a25c82c-f72c-4ecb-a760-a568761bd5f2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.704780 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.705194 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="736df6ca-1308-4f87-a39e-7aca6ad4d5a1" containerName="kube-state-metrics" containerID="cri-o://612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54" gracePeriod=30 Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.730635 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.730670 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a25c82c-f72c-4ecb-a760-a568761bd5f2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.797904 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-pmrch_cdd2d089-a1a5-4e25-920a-a485d0fd319f/ovn-controller/0.log" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.797971 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pmrch" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.798414 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-358a-account-create-update-6m47f"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.850203 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-358a-account-create-update-6m47f"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.856820 4890 generic.go:334] "Generic (PLEG): container finished" podID="697e1d3a-fab0-471b-bea8-43212f489fec" containerID="9e0291aac0c698ccda6b3ca51011fe12c6a3dfe3353a4fd388da9648e8a82def" exitCode=0 Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.856942 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"697e1d3a-fab0-471b-bea8-43212f489fec","Type":"ContainerDied","Data":"9e0291aac0c698ccda6b3ca51011fe12c6a3dfe3353a4fd388da9648e8a82def"} Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.885810 4890 generic.go:334] "Generic (PLEG): container finished" podID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerID="1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1" exitCode=0 Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.885872 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-688fbc5db-f9csp" event={"ID":"2a25c82c-f72c-4ecb-a760-a568761bd5f2","Type":"ContainerDied","Data":"1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1"} Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.885898 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-688fbc5db-f9csp" 
event={"ID":"2a25c82c-f72c-4ecb-a760-a568761bd5f2","Type":"ContainerDied","Data":"088703b7d204342f58913c79423558d243e3908868214c9cb4562bed6d264701"} Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.885915 4890 scope.go:117] "RemoveContainer" containerID="1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.886038 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-688fbc5db-f9csp" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.902166 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.902456 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="770a4f11-b2a3-46fd-a06d-3af27edd3d9f" containerName="memcached" containerID="cri-o://58a6039e13c8c21e15265060210fffee78adef95926968b02f31c6424dc6e4e2" gracePeriod=30 Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.933094 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6946c9f5b4-2l82t" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:59956->10.217.0.167:9311: read: connection reset by peer" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.933329 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6946c9f5b4-2l82t" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:59966->10.217.0.167:9311: read: connection reset by peer" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.933996 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run-ovn\") pod \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.934233 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-ovn-controller-tls-certs\") pod \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.934321 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdd2d089-a1a5-4e25-920a-a485d0fd319f-scripts\") pod \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.934421 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cdd2d089-a1a5-4e25-920a-a485d0fd319f" (UID: "cdd2d089-a1a5-4e25-920a-a485d0fd319f"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.934545 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbk87\" (UniqueName: \"kubernetes.io/projected/cdd2d089-a1a5-4e25-920a-a485d0fd319f-kube-api-access-qbk87\") pod \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.934632 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-combined-ca-bundle\") pod \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.934699 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run\") pod \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.934757 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-log-ovn\") pod \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\" (UID: \"cdd2d089-a1a5-4e25-920a-a485d0fd319f\") " Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.935205 4890 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.935481 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod 
"cdd2d089-a1a5-4e25-920a-a485d0fd319f" (UID: "cdd2d089-a1a5-4e25-920a-a485d0fd319f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.936703 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run" (OuterVolumeSpecName: "var-run") pod "cdd2d089-a1a5-4e25-920a-a485d0fd319f" (UID: "cdd2d089-a1a5-4e25-920a-a485d0fd319f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.936881 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd2d089-a1a5-4e25-920a-a485d0fd319f-scripts" (OuterVolumeSpecName: "scripts") pod "cdd2d089-a1a5-4e25-920a-a485d0fd319f" (UID: "cdd2d089-a1a5-4e25-920a-a485d0fd319f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.941768 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd2d089-a1a5-4e25-920a-a485d0fd319f-kube-api-access-qbk87" (OuterVolumeSpecName: "kube-api-access-qbk87") pod "cdd2d089-a1a5-4e25-920a-a485d0fd319f" (UID: "cdd2d089-a1a5-4e25-920a-a485d0fd319f"). InnerVolumeSpecName "kube-api-access-qbk87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:01 crc kubenswrapper[4890]: E0121 15:57:01.951795 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.970837 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="caae7093-b594-47fb-b863-38d825f0048d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.972813 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052ad7d6-6d71-4b3b-962a-db635b2df4a3" path="/var/lib/kubelet/pods/052ad7d6-6d71-4b3b-962a-db635b2df4a3/volumes" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.975719 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="212a7372-7b31-40f6-bef8-fc76925be961" path="/var/lib/kubelet/pods/212a7372-7b31-40f6-bef8-fc76925be961/volumes" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.976611 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab783d9-382b-4b61-85f0-f4a82160effe" path="/var/lib/kubelet/pods/3ab783d9-382b-4b61-85f0-f4a82160effe/volumes" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.977230 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac05403-ccde-41a4-9312-7c536af1825d" path="/var/lib/kubelet/pods/3ac05403-ccde-41a4-9312-7c536af1825d/volumes" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.979117 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e21e0c9-91df-4f87-a32f-30fa3d3fa874" 
path="/var/lib/kubelet/pods/4e21e0c9-91df-4f87-a32f-30fa3d3fa874/volumes" Jan 21 15:57:01 crc kubenswrapper[4890]: E0121 15:57:01.981143 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.981743 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57d2ee81-accb-4ff7-8fa6-52ed7d728258" path="/var/lib/kubelet/pods/57d2ee81-accb-4ff7-8fa6-52ed7d728258/volumes" Jan 21 15:57:01 crc kubenswrapper[4890]: I0121 15:57:01.982608 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="defb5f2d-053c-4b32-beb1-d10d70bacce1" path="/var/lib/kubelet/pods/defb5f2d-053c-4b32-beb1-d10d70bacce1/volumes" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.000610 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.000685 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerName="ovn-northd" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.037788 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdd2d089-a1a5-4e25-920a-a485d0fd319f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 
crc kubenswrapper[4890]: I0121 15:57:02.037829 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbk87\" (UniqueName: \"kubernetes.io/projected/cdd2d089-a1a5-4e25-920a-a485d0fd319f-kube-api-access-qbk87\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.037843 4890 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.037856 4890 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cdd2d089-a1a5-4e25-920a-a485d0fd319f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.048376 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4691-account-create-update-77kb4" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.055699 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdd2d089-a1a5-4e25-920a-a485d0fd319f" (UID: "cdd2d089-a1a5-4e25-920a-a485d0fd319f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.082020 4890 generic.go:334] "Generic (PLEG): container finished" podID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerID="449855515a900befe1127318232d23ee1ce08ab1fc81e724dd3ee85e1bdccca0" exitCode=0 Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.102554 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "cdd2d089-a1a5-4e25-920a-a485d0fd319f" (UID: "cdd2d089-a1a5-4e25-920a-a485d0fd319f"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: W0121 15:57:02.111475 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb402af9c_655e_4cd8_91a4_f9ff4f8ef671.slice/crio-e8ea9add3c2b9995d32dc12e3136a8383f758efa51a2c9ece0a3e2cad5a26755 WatchSource:0}: Error finding container e8ea9add3c2b9995d32dc12e3136a8383f758efa51a2c9ece0a3e2cad5a26755: Status 404 returned error can't find the container with id e8ea9add3c2b9995d32dc12e3136a8383f758efa51a2c9ece0a3e2cad5a26755 Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.111735 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4a5a52d2c5dbc8140605411d1d6694c13a149e34211ff2de1edf57e55a03b12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.121484 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="d4a5a52d2c5dbc8140605411d1d6694c13a149e34211ff2de1edf57e55a03b12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.140686 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4a5a52d2c5dbc8140605411d1d6694c13a149e34211ff2de1edf57e55a03b12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.140751 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2780ff06-b30a-43e8-97d5-b9477d2713d6" containerName="nova-scheduler-scheduler" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.146020 4890 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.147846 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd2d089-a1a5-4e25-920a-a485d0fd319f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.148649 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p8whv" event={"ID":"ef1ee1ae-c8ba-469c-ad49-896510b81e81","Type":"ContainerStarted","Data":"cbfaafd60396a31b01efc8d8526cdb1f72e3b5960589d2afa866b3631242e11b"} Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.148700 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5z4qn"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 
15:57:02.148724 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-de17-account-create-update-crzrr" event={"ID":"86742085-590c-4ce5-b694-8a91a90c0b6f","Type":"ContainerStarted","Data":"75d5bd667ec839ed05c5c7f5e39e1a3ad37b1cce4bbf9250306b0f23226f3286"} Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.148739 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4691-account-create-update-77kb4" event={"ID":"f3ca330d-0795-4c1d-8a5e-12df75f280ba","Type":"ContainerDied","Data":"6e8e7c123b067cc133762a2ce16cbfa6868d7222a6de39ed4ee63d3dd8000a69"} Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.148754 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-358a-account-create-update-gzppl"] Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.149104 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerName="placement-api" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.149116 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerName="placement-api" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.149143 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerName="placement-log" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.149149 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerName="placement-log" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.149162 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd2d089-a1a5-4e25-920a-a485d0fd319f" containerName="ovn-controller" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.149168 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd2d089-a1a5-4e25-920a-a485d0fd319f" containerName="ovn-controller" Jan 21 15:57:02 crc 
kubenswrapper[4890]: I0121 15:57:02.149335 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerName="placement-log" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.149428 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd2d089-a1a5-4e25-920a-a485d0fd319f" containerName="ovn-controller" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.149450 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" containerName="placement-api" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.150019 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-358a-account-create-update-gzppl"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.150037 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e775a69e-619f-4920-8fc9-6d216e400c0e","Type":"ContainerDied","Data":"449855515a900befe1127318232d23ee1ce08ab1fc81e724dd3ee85e1bdccca0"} Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.150057 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-n8mlq"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.150073 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-n8mlq"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.150087 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4x97f"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.150098 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4x97f"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.150110 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5d6cd7788b-hrbst"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.150301 4890 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/keystone-5d6cd7788b-hrbst" podUID="db0e4f67-3406-4153-9fb3-3553f6fccad1" containerName="keystone-api" containerID="cri-o://ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f" gracePeriod=30 Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.151506 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.171263 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.192103 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.198569 4890 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:57:02 crc kubenswrapper[4890]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 15:57:02 crc kubenswrapper[4890]: Jan 21 15:57:02 crc kubenswrapper[4890]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 15:57:02 crc kubenswrapper[4890]: Jan 21 15:57:02 crc kubenswrapper[4890]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 15:57:02 crc kubenswrapper[4890]: Jan 21 15:57:02 crc kubenswrapper[4890]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 15:57:02 crc kubenswrapper[4890]: Jan 21 15:57:02 crc kubenswrapper[4890]: if [ -n "" ]; then Jan 21 15:57:02 crc kubenswrapper[4890]: GRANT_DATABASE="" Jan 21 15:57:02 crc kubenswrapper[4890]: else Jan 21 15:57:02 crc kubenswrapper[4890]: GRANT_DATABASE="*" Jan 21 15:57:02 crc kubenswrapper[4890]: fi Jan 21 15:57:02 crc kubenswrapper[4890]: Jan 21 15:57:02 crc kubenswrapper[4890]: # going for maximum compatibility 
here: Jan 21 15:57:02 crc kubenswrapper[4890]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 15:57:02 crc kubenswrapper[4890]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 15:57:02 crc kubenswrapper[4890]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 15:57:02 crc kubenswrapper[4890]: # support updates Jan 21 15:57:02 crc kubenswrapper[4890]: Jan 21 15:57:02 crc kubenswrapper[4890]: $MYSQL_CMD < logger="UnhandledError" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.198742 4890 scope.go:117] "RemoveContainer" containerID="98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.200066 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-5z4qn" podUID="b402af9c-655e-4cd8-91a4-f9ff4f8ef671" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.206305 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-pmrch_cdd2d089-a1a5-4e25-920a-a485d0fd319f/ovn-controller/0.log" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.206382 4890 generic.go:334] "Generic (PLEG): container finished" podID="cdd2d089-a1a5-4e25-920a-a485d0fd319f" containerID="fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9" exitCode=143 Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.206496 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64d44774fc-92wps" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.206497 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1136-account-create-update-bc5w2" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.206560 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-v5vck"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.206586 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch" event={"ID":"cdd2d089-a1a5-4e25-920a-a485d0fd319f","Type":"ContainerDied","Data":"fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9"} Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.206613 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pmrch" event={"ID":"cdd2d089-a1a5-4e25-920a-a485d0fd319f","Type":"ContainerDied","Data":"2d4edbfe177c7ef30092237586acb85285f53fc0a2d9ab2be6f550b1b2daf014"} Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.206653 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pmrch" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.206712 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.207715 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.217403 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-358a-account-create-update-gzppl"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.223009 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-v5vck"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.278241 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts\") pod \"keystone-358a-account-create-update-gzppl\" (UID: \"abe082f6-090f-4887-ab4a-cee13a8ad2a2\") " pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.278459 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2tq9\" (UniqueName: \"kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9\") pod \"keystone-358a-account-create-update-gzppl\" (UID: \"abe082f6-090f-4887-ab4a-cee13a8ad2a2\") " pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.361882 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5z4qn"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.386911 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.400834 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts\") pod \"keystone-358a-account-create-update-gzppl\" (UID: \"abe082f6-090f-4887-ab4a-cee13a8ad2a2\") " pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.400976 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2tq9\" (UniqueName: \"kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9\") pod \"keystone-358a-account-create-update-gzppl\" (UID: \"abe082f6-090f-4887-ab4a-cee13a8ad2a2\") " pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.401535 4890 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.401592 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts podName:abe082f6-090f-4887-ab4a-cee13a8ad2a2 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:02.901570935 +0000 UTC m=+1505.263013334 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts") pod "keystone-358a-account-create-update-gzppl" (UID: "abe082f6-090f-4887-ab4a-cee13a8ad2a2") : configmap "openstack-scripts" not found Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.407426 4890 projected.go:194] Error preparing data for projected volume kube-api-access-n2tq9 for pod openstack/keystone-358a-account-create-update-gzppl: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.407499 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9 podName:abe082f6-090f-4887-ab4a-cee13a8ad2a2 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:02.907481142 +0000 UTC m=+1505.268923551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n2tq9" (UniqueName: "kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9") pod "keystone-358a-account-create-update-gzppl" (UID: "abe082f6-090f-4887-ab4a-cee13a8ad2a2") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.492424 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4691-account-create-update-77kb4"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.494576 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4691-account-create-update-77kb4"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.502279 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-688fbc5db-f9csp"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.509055 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-688fbc5db-f9csp"] Jan 21 15:57:02 crc 
kubenswrapper[4890]: I0121 15:57:02.535984 4890 scope.go:117] "RemoveContainer" containerID="1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.536867 4890 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod736df6ca_1308_4f87_a39e_7aca6ad4d5a1.slice/crio-612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff7b18fd_53f0_48dc_84ae_d706234668f7.slice/crio-b1850cb5e39351073cf39f1d0e88018e7526c6b8091783f112754a2815cb88bf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod736df6ca_1308_4f87_a39e_7aca6ad4d5a1.slice/crio-conmon-612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3ca330d_0795_4c1d_8a5e_12df75f280ba.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff7b18fd_53f0_48dc_84ae_d706234668f7.slice/crio-conmon-b1850cb5e39351073cf39f1d0e88018e7526c6b8091783f112754a2815cb88bf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a25c82c_f72c_4ecb_a760_a568761bd5f2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a25c82c_f72c_4ecb_a760_a568761bd5f2.slice/crio-088703b7d204342f58913c79423558d243e3908868214c9cb4562bed6d264701\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod770a4f11_b2a3_46fd_a06d_3af27edd3d9f.slice/crio-58a6039e13c8c21e15265060210fffee78adef95926968b02f31c6424dc6e4e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff7b18fd_53f0_48dc_84ae_d706234668f7.slice/crio-2660277560aad838dbebdfb2cd900cfc69db1d476e814c29ad6367cf3448c4ee.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.539686 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1\": container with ID starting with 1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1 not found: ID does not exist" containerID="1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.539714 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1"} err="failed to get container status \"1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1\": rpc error: code = NotFound desc = could not find container \"1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1\": container with ID starting with 1771c90ff6c557a7085013cb5fe524e2692c5897ee5596f9033569f4a0dcacd1 not found: ID does not exist" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.539737 4890 scope.go:117] "RemoveContainer" containerID="98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.540190 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.540506 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac\": container with ID starting with 98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac not found: ID does not exist" containerID="98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.540549 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac"} err="failed to get container status \"98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac\": rpc error: code = NotFound desc = could not find container \"98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac\": container with ID starting with 98e90a9f7101e2b7931a2d1c67fd13848b6128b5affd0b0f55bfc72d31361fac not found: ID does not exist" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.540575 4890 scope.go:117] "RemoveContainer" containerID="fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.568055 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.570957 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-n2tq9 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-358a-account-create-update-gzppl" podUID="abe082f6-090f-4887-ab4a-cee13a8ad2a2" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.572067 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" containerName="galera" containerID="cri-o://eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341" gracePeriod=30 Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.591959 4890 scope.go:117] "RemoveContainer" containerID="fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.593196 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9\": container with ID starting with fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9 not found: ID does not exist" containerID="fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.593248 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9"} err="failed to get container status \"fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9\": rpc error: code = NotFound desc = could not find container \"fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9\": container with ID starting with fd4211f21b253870e3fae40977a03d9c49c9c2b0f158923f686fac957639d5b9 not found: ID does 
not exist" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.595425 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.607271 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.608026 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-scripts\") pod \"e775a69e-619f-4920-8fc9-6d216e400c0e\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.608086 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-combined-ca-bundle\") pod \"e775a69e-619f-4920-8fc9-6d216e400c0e\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.608312 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47dnj\" (UniqueName: \"kubernetes.io/projected/e775a69e-619f-4920-8fc9-6d216e400c0e-kube-api-access-47dnj\") pod \"e775a69e-619f-4920-8fc9-6d216e400c0e\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.608365 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-public-tls-certs\") pod \"e775a69e-619f-4920-8fc9-6d216e400c0e\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.608400 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-httpd-run\") 
pod \"e775a69e-619f-4920-8fc9-6d216e400c0e\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.608427 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-logs\") pod \"e775a69e-619f-4920-8fc9-6d216e400c0e\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.608494 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e775a69e-619f-4920-8fc9-6d216e400c0e\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.608519 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-config-data\") pod \"e775a69e-619f-4920-8fc9-6d216e400c0e\" (UID: \"e775a69e-619f-4920-8fc9-6d216e400c0e\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.608807 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e775a69e-619f-4920-8fc9-6d216e400c0e" (UID: "e775a69e-619f-4920-8fc9-6d216e400c0e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.609210 4890 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.609707 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-logs" (OuterVolumeSpecName: "logs") pod "e775a69e-619f-4920-8fc9-6d216e400c0e" (UID: "e775a69e-619f-4920-8fc9-6d216e400c0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.612658 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "e775a69e-619f-4920-8fc9-6d216e400c0e" (UID: "e775a69e-619f-4920-8fc9-6d216e400c0e"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.612889 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-scripts" (OuterVolumeSpecName: "scripts") pod "e775a69e-619f-4920-8fc9-6d216e400c0e" (UID: "e775a69e-619f-4920-8fc9-6d216e400c0e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.615198 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.625585 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.626769 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e775a69e-619f-4920-8fc9-6d216e400c0e-kube-api-access-47dnj" (OuterVolumeSpecName: "kube-api-access-47dnj") pod "e775a69e-619f-4920-8fc9-6d216e400c0e" (UID: "e775a69e-619f-4920-8fc9-6d216e400c0e"). InnerVolumeSpecName "kube-api-access-47dnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.633320 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-64d44774fc-92wps"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.641386 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-64d44774fc-92wps"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.670998 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pmrch"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.684649 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pmrch"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.687930 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-config-data" (OuterVolumeSpecName: "config-data") pod "e775a69e-619f-4920-8fc9-6d216e400c0e" (UID: "e775a69e-619f-4920-8fc9-6d216e400c0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.704293 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e775a69e-619f-4920-8fc9-6d216e400c0e" (UID: "e775a69e-619f-4920-8fc9-6d216e400c0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.704364 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e775a69e-619f-4920-8fc9-6d216e400c0e" (UID: "e775a69e-619f-4920-8fc9-6d216e400c0e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710054 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-httpd-run\") pod \"697e1d3a-fab0-471b-bea8-43212f489fec\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710124 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-scripts\") pod \"697e1d3a-fab0-471b-bea8-43212f489fec\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710172 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-combined-ca-bundle\") pod \"697e1d3a-fab0-471b-bea8-43212f489fec\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " Jan 21 15:57:02 
crc kubenswrapper[4890]: I0121 15:57:02.710216 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-config-data\") pod \"697e1d3a-fab0-471b-bea8-43212f489fec\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710264 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"697e1d3a-fab0-471b-bea8-43212f489fec\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710299 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp7pc\" (UniqueName: \"kubernetes.io/projected/697e1d3a-fab0-471b-bea8-43212f489fec-kube-api-access-qp7pc\") pod \"697e1d3a-fab0-471b-bea8-43212f489fec\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710334 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-logs\") pod \"697e1d3a-fab0-471b-bea8-43212f489fec\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710471 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-internal-tls-certs\") pod \"697e1d3a-fab0-471b-bea8-43212f489fec\" (UID: \"697e1d3a-fab0-471b-bea8-43212f489fec\") " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710835 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47dnj\" (UniqueName: \"kubernetes.io/projected/e775a69e-619f-4920-8fc9-6d216e400c0e-kube-api-access-47dnj\") on node \"crc\" DevicePath \"\"" Jan 21 
15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710845 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710854 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e775a69e-619f-4920-8fc9-6d216e400c0e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710863 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710882 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710890 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.710898 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e775a69e-619f-4920-8fc9-6d216e400c0e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.712852 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "697e1d3a-fab0-471b-bea8-43212f489fec" (UID: "697e1d3a-fab0-471b-bea8-43212f489fec"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.715234 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-logs" (OuterVolumeSpecName: "logs") pod "697e1d3a-fab0-471b-bea8-43212f489fec" (UID: "697e1d3a-fab0-471b-bea8-43212f489fec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.716394 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-scripts" (OuterVolumeSpecName: "scripts") pod "697e1d3a-fab0-471b-bea8-43212f489fec" (UID: "697e1d3a-fab0-471b-bea8-43212f489fec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.716861 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1136-account-create-update-bc5w2"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.722614 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1136-account-create-update-bc5w2"] Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.725260 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697e1d3a-fab0-471b-bea8-43212f489fec-kube-api-access-qp7pc" (OuterVolumeSpecName: "kube-api-access-qp7pc") pod "697e1d3a-fab0-471b-bea8-43212f489fec" (UID: "697e1d3a-fab0-471b-bea8-43212f489fec"). InnerVolumeSpecName "kube-api-access-qp7pc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.727135 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "697e1d3a-fab0-471b-bea8-43212f489fec" (UID: "697e1d3a-fab0-471b-bea8-43212f489fec"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.735250 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.750731 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "697e1d3a-fab0-471b-bea8-43212f489fec" (UID: "697e1d3a-fab0-471b-bea8-43212f489fec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.785602 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-config-data" (OuterVolumeSpecName: "config-data") pod "697e1d3a-fab0-471b-bea8-43212f489fec" (UID: "697e1d3a-fab0-471b-bea8-43212f489fec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.812515 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.812558 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp7pc\" (UniqueName: \"kubernetes.io/projected/697e1d3a-fab0-471b-bea8-43212f489fec-kube-api-access-qp7pc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.812572 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.812583 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.812594 4890 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/697e1d3a-fab0-471b-bea8-43212f489fec-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.812606 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.812616 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.812627 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.842522 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "697e1d3a-fab0-471b-bea8-43212f489fec" (UID: "697e1d3a-fab0-471b-bea8-43212f489fec"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.855584 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.903861 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.914995 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2tq9\" (UniqueName: \"kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9\") pod \"keystone-358a-account-create-update-gzppl\" (UID: \"abe082f6-090f-4887-ab4a-cee13a8ad2a2\") " pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.915082 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts\") pod \"keystone-358a-account-create-update-gzppl\" (UID: \"abe082f6-090f-4887-ab4a-cee13a8ad2a2\") " pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.915131 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/697e1d3a-fab0-471b-bea8-43212f489fec-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.915143 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.915193 4890 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.915235 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts podName:abe082f6-090f-4887-ab4a-cee13a8ad2a2 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:03.915222477 +0000 UTC m=+1506.276664886 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts") pod "keystone-358a-account-create-update-gzppl" (UID: "abe082f6-090f-4887-ab4a-cee13a8ad2a2") : configmap "openstack-scripts" not found Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.917019 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.918614 4890 projected.go:194] Error preparing data for projected volume kube-api-access-n2tq9 for pod openstack/keystone-358a-account-create-update-gzppl: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.918649 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9 podName:abe082f6-090f-4887-ab4a-cee13a8ad2a2 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:57:03.918637292 +0000 UTC m=+1506.280079701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n2tq9" (UniqueName: "kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9") pod "keystone-358a-account-create-update-gzppl" (UID: "abe082f6-090f-4887-ab4a-cee13a8ad2a2") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.928424 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4 is running failed: container process not found" containerID="90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.931168 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4 is running failed: container process not found" containerID="90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.931875 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4 is running failed: container process not found" containerID="90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 15:57:02 crc kubenswrapper[4890]: E0121 15:57:02.931915 4890 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="50c99515-8e62-4e54-9ffc-e9294db2dc4f" containerName="nova-cell0-conductor-conductor" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.933538 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.964466 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.972107 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:57:02 crc kubenswrapper[4890]: I0121 15:57:02.982983 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016660 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-combined-ca-bundle\") pod \"33bbda2a-fde6-466f-92c8-88556941b8a3\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016707 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-public-tls-certs\") pod \"371fefce-bb16-4c48-ac5a-01885e77c090\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016731 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371fefce-bb16-4c48-ac5a-01885e77c090-logs\") pod 
\"371fefce-bb16-4c48-ac5a-01885e77c090\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016759 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqpmr\" (UniqueName: \"kubernetes.io/projected/371fefce-bb16-4c48-ac5a-01885e77c090-kube-api-access-nqpmr\") pod \"371fefce-bb16-4c48-ac5a-01885e77c090\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016786 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data\") pod \"371fefce-bb16-4c48-ac5a-01885e77c090\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016815 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-public-tls-certs\") pod \"33bbda2a-fde6-466f-92c8-88556941b8a3\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016843 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data-custom\") pod \"33bbda2a-fde6-466f-92c8-88556941b8a3\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016862 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/371fefce-bb16-4c48-ac5a-01885e77c090-etc-machine-id\") pod \"371fefce-bb16-4c48-ac5a-01885e77c090\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016903 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-internal-tls-certs\") pod \"33bbda2a-fde6-466f-92c8-88556941b8a3\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016919 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmhn6\" (UniqueName: \"kubernetes.io/projected/33bbda2a-fde6-466f-92c8-88556941b8a3-kube-api-access-gmhn6\") pod \"33bbda2a-fde6-466f-92c8-88556941b8a3\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016940 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-scripts\") pod \"371fefce-bb16-4c48-ac5a-01885e77c090\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.016985 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-combined-ca-bundle\") pod \"371fefce-bb16-4c48-ac5a-01885e77c090\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.017019 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-config\") pod \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.017057 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data\") pod \"33bbda2a-fde6-466f-92c8-88556941b8a3\" (UID: 
\"33bbda2a-fde6-466f-92c8-88556941b8a3\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.017082 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-combined-ca-bundle\") pod \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.017103 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-certs\") pod \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.017145 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htgk6\" (UniqueName: \"kubernetes.io/projected/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-api-access-htgk6\") pod \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\" (UID: \"736df6ca-1308-4f87-a39e-7aca6ad4d5a1\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.017166 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33bbda2a-fde6-466f-92c8-88556941b8a3-logs\") pod \"33bbda2a-fde6-466f-92c8-88556941b8a3\" (UID: \"33bbda2a-fde6-466f-92c8-88556941b8a3\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.017198 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data-custom\") pod \"371fefce-bb16-4c48-ac5a-01885e77c090\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.017213 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-internal-tls-certs\") pod \"371fefce-bb16-4c48-ac5a-01885e77c090\" (UID: \"371fefce-bb16-4c48-ac5a-01885e77c090\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.019017 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/371fefce-bb16-4c48-ac5a-01885e77c090-logs" (OuterVolumeSpecName: "logs") pod "371fefce-bb16-4c48-ac5a-01885e77c090" (UID: "371fefce-bb16-4c48-ac5a-01885e77c090"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.030184 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bbda2a-fde6-466f-92c8-88556941b8a3-kube-api-access-gmhn6" (OuterVolumeSpecName: "kube-api-access-gmhn6") pod "33bbda2a-fde6-466f-92c8-88556941b8a3" (UID: "33bbda2a-fde6-466f-92c8-88556941b8a3"). InnerVolumeSpecName "kube-api-access-gmhn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.030268 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/371fefce-bb16-4c48-ac5a-01885e77c090-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "371fefce-bb16-4c48-ac5a-01885e77c090" (UID: "371fefce-bb16-4c48-ac5a-01885e77c090"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.031990 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33bbda2a-fde6-466f-92c8-88556941b8a3" (UID: "33bbda2a-fde6-466f-92c8-88556941b8a3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.032516 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/371fefce-bb16-4c48-ac5a-01885e77c090-kube-api-access-nqpmr" (OuterVolumeSpecName: "kube-api-access-nqpmr") pod "371fefce-bb16-4c48-ac5a-01885e77c090" (UID: "371fefce-bb16-4c48-ac5a-01885e77c090"). InnerVolumeSpecName "kube-api-access-nqpmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.033343 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33bbda2a-fde6-466f-92c8-88556941b8a3-logs" (OuterVolumeSpecName: "logs") pod "33bbda2a-fde6-466f-92c8-88556941b8a3" (UID: "33bbda2a-fde6-466f-92c8-88556941b8a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.034902 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-scripts" (OuterVolumeSpecName: "scripts") pod "371fefce-bb16-4c48-ac5a-01885e77c090" (UID: "371fefce-bb16-4c48-ac5a-01885e77c090"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.040987 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-api-access-htgk6" (OuterVolumeSpecName: "kube-api-access-htgk6") pod "736df6ca-1308-4f87-a39e-7aca6ad4d5a1" (UID: "736df6ca-1308-4f87-a39e-7aca6ad4d5a1"). InnerVolumeSpecName "kube-api-access-htgk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.043609 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "371fefce-bb16-4c48-ac5a-01885e77c090" (UID: "371fefce-bb16-4c48-ac5a-01885e77c090"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.120046 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-combined-ca-bundle\") pod \"84118502-58f0-48b2-b659-7f748311fa22\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.120146 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bczg\" (UniqueName: \"kubernetes.io/projected/84118502-58f0-48b2-b659-7f748311fa22-kube-api-access-9bczg\") pod \"84118502-58f0-48b2-b659-7f748311fa22\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.120181 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-config-data\") pod \"84118502-58f0-48b2-b659-7f748311fa22\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.120211 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4099ef81-b3a1-4e17-af41-48813a488181-logs\") pod \"4099ef81-b3a1-4e17-af41-48813a488181\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.120234 4890 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhmtn\" (UniqueName: \"kubernetes.io/projected/4099ef81-b3a1-4e17-af41-48813a488181-kube-api-access-qhmtn\") pod \"4099ef81-b3a1-4e17-af41-48813a488181\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.120288 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-internal-tls-certs\") pod \"84118502-58f0-48b2-b659-7f748311fa22\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.120309 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-config-data\") pod \"4099ef81-b3a1-4e17-af41-48813a488181\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.120346 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-combined-ca-bundle\") pod \"4099ef81-b3a1-4e17-af41-48813a488181\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.121476 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84118502-58f0-48b2-b659-7f748311fa22-logs\") pod \"84118502-58f0-48b2-b659-7f748311fa22\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.121521 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86742085-590c-4ce5-b694-8a91a90c0b6f-operator-scripts\") pod 
\"86742085-590c-4ce5-b694-8a91a90c0b6f\" (UID: \"86742085-590c-4ce5-b694-8a91a90c0b6f\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.121566 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-public-tls-certs\") pod \"84118502-58f0-48b2-b659-7f748311fa22\" (UID: \"84118502-58f0-48b2-b659-7f748311fa22\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.121597 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69zbh\" (UniqueName: \"kubernetes.io/projected/86742085-590c-4ce5-b694-8a91a90c0b6f-kube-api-access-69zbh\") pod \"86742085-590c-4ce5-b694-8a91a90c0b6f\" (UID: \"86742085-590c-4ce5-b694-8a91a90c0b6f\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.121638 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-nova-metadata-tls-certs\") pod \"4099ef81-b3a1-4e17-af41-48813a488181\" (UID: \"4099ef81-b3a1-4e17-af41-48813a488181\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.122083 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371fefce-bb16-4c48-ac5a-01885e77c090-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.122099 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqpmr\" (UniqueName: \"kubernetes.io/projected/371fefce-bb16-4c48-ac5a-01885e77c090-kube-api-access-nqpmr\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.122110 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 
15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.122119 4890 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/371fefce-bb16-4c48-ac5a-01885e77c090-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.122128 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmhn6\" (UniqueName: \"kubernetes.io/projected/33bbda2a-fde6-466f-92c8-88556941b8a3-kube-api-access-gmhn6\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.122136 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.122146 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htgk6\" (UniqueName: \"kubernetes.io/projected/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-api-access-htgk6\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.122154 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33bbda2a-fde6-466f-92c8-88556941b8a3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.122162 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.124457 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84118502-58f0-48b2-b659-7f748311fa22-logs" (OuterVolumeSpecName: "logs") pod "84118502-58f0-48b2-b659-7f748311fa22" (UID: "84118502-58f0-48b2-b659-7f748311fa22"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.125068 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4099ef81-b3a1-4e17-af41-48813a488181-logs" (OuterVolumeSpecName: "logs") pod "4099ef81-b3a1-4e17-af41-48813a488181" (UID: "4099ef81-b3a1-4e17-af41-48813a488181"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.126499 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86742085-590c-4ce5-b694-8a91a90c0b6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86742085-590c-4ce5-b694-8a91a90c0b6f" (UID: "86742085-590c-4ce5-b694-8a91a90c0b6f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.128195 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84118502-58f0-48b2-b659-7f748311fa22-kube-api-access-9bczg" (OuterVolumeSpecName: "kube-api-access-9bczg") pod "84118502-58f0-48b2-b659-7f748311fa22" (UID: "84118502-58f0-48b2-b659-7f748311fa22"). InnerVolumeSpecName "kube-api-access-9bczg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.128921 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4099ef81-b3a1-4e17-af41-48813a488181-kube-api-access-qhmtn" (OuterVolumeSpecName: "kube-api-access-qhmtn") pod "4099ef81-b3a1-4e17-af41-48813a488181" (UID: "4099ef81-b3a1-4e17-af41-48813a488181"). InnerVolumeSpecName "kube-api-access-qhmtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.131407 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "736df6ca-1308-4f87-a39e-7aca6ad4d5a1" (UID: "736df6ca-1308-4f87-a39e-7aca6ad4d5a1"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.135660 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p8whv" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.142085 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33bbda2a-fde6-466f-92c8-88556941b8a3" (UID: "33bbda2a-fde6-466f-92c8-88556941b8a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.163144 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "736df6ca-1308-4f87-a39e-7aca6ad4d5a1" (UID: "736df6ca-1308-4f87-a39e-7aca6ad4d5a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.165114 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86742085-590c-4ce5-b694-8a91a90c0b6f-kube-api-access-69zbh" (OuterVolumeSpecName: "kube-api-access-69zbh") pod "86742085-590c-4ce5-b694-8a91a90c0b6f" (UID: "86742085-590c-4ce5-b694-8a91a90c0b6f"). 
InnerVolumeSpecName "kube-api-access-69zbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.182287 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "736df6ca-1308-4f87-a39e-7aca6ad4d5a1" (UID: "736df6ca-1308-4f87-a39e-7aca6ad4d5a1"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.188624 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "371fefce-bb16-4c48-ac5a-01885e77c090" (UID: "371fefce-bb16-4c48-ac5a-01885e77c090"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.192283 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "371fefce-bb16-4c48-ac5a-01885e77c090" (UID: "371fefce-bb16-4c48-ac5a-01885e77c090"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.213470 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "371fefce-bb16-4c48-ac5a-01885e77c090" (UID: "371fefce-bb16-4c48-ac5a-01885e77c090"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.218213 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84118502-58f0-48b2-b659-7f748311fa22" (UID: "84118502-58f0-48b2-b659-7f748311fa22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.221404 4890 generic.go:334] "Generic (PLEG): container finished" podID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerID="b1850cb5e39351073cf39f1d0e88018e7526c6b8091783f112754a2815cb88bf" exitCode=0 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.221441 4890 generic.go:334] "Generic (PLEG): container finished" podID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerID="e6874597d1e13caa14de2a102072cb91ab0359d88ae4e3beb3a5adaa31d395bd" exitCode=2 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.221454 4890 generic.go:334] "Generic (PLEG): container finished" podID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerID="2660277560aad838dbebdfb2cd900cfc69db1d476e814c29ad6367cf3448c4ee" exitCode=0 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.221507 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerDied","Data":"b1850cb5e39351073cf39f1d0e88018e7526c6b8091783f112754a2815cb88bf"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.221540 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerDied","Data":"e6874597d1e13caa14de2a102072cb91ab0359d88ae4e3beb3a5adaa31d395bd"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.221552 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerDied","Data":"2660277560aad838dbebdfb2cd900cfc69db1d476e814c29ad6367cf3448c4ee"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225101 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdc7k\" (UniqueName: \"kubernetes.io/projected/ef1ee1ae-c8ba-469c-ad49-896510b81e81-kube-api-access-bdc7k\") pod \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\" (UID: \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225164 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ee1ae-c8ba-469c-ad49-896510b81e81-operator-scripts\") pod \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\" (UID: \"ef1ee1ae-c8ba-469c-ad49-896510b81e81\") " Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225613 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84118502-58f0-48b2-b659-7f748311fa22-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225632 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86742085-590c-4ce5-b694-8a91a90c0b6f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225646 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225656 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69zbh\" (UniqueName: \"kubernetes.io/projected/86742085-590c-4ce5-b694-8a91a90c0b6f-kube-api-access-69zbh\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc 
kubenswrapper[4890]: I0121 15:57:03.225670 4890 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225684 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225698 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225709 4890 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/736df6ca-1308-4f87-a39e-7aca6ad4d5a1-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225723 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bczg\" (UniqueName: \"kubernetes.io/projected/84118502-58f0-48b2-b659-7f748311fa22-kube-api-access-9bczg\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225734 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4099ef81-b3a1-4e17-af41-48813a488181-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225746 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhmtn\" (UniqueName: \"kubernetes.io/projected/4099ef81-b3a1-4e17-af41-48813a488181-kube-api-access-qhmtn\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225758 4890 
reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225769 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.225780 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.228486 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1ee1ae-c8ba-469c-ad49-896510b81e81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef1ee1ae-c8ba-469c-ad49-896510b81e81" (UID: "ef1ee1ae-c8ba-469c-ad49-896510b81e81"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.229063 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-config-data" (OuterVolumeSpecName: "config-data") pod "4099ef81-b3a1-4e17-af41-48813a488181" (UID: "4099ef81-b3a1-4e17-af41-48813a488181"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.229291 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "33bbda2a-fde6-466f-92c8-88556941b8a3" (UID: "33bbda2a-fde6-466f-92c8-88556941b8a3"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.229908 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"697e1d3a-fab0-471b-bea8-43212f489fec","Type":"ContainerDied","Data":"215061d8efd4d431d66f95a42747cad2109fdd862c0bf59e3b7f07e5e4da7f48"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.229994 4890 scope.go:117] "RemoveContainer" containerID="9e0291aac0c698ccda6b3ca51011fe12c6a3dfe3353a4fd388da9648e8a82def" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.230120 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.231067 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1ee1ae-c8ba-469c-ad49-896510b81e81-kube-api-access-bdc7k" (OuterVolumeSpecName: "kube-api-access-bdc7k") pod "ef1ee1ae-c8ba-469c-ad49-896510b81e81" (UID: "ef1ee1ae-c8ba-469c-ad49-896510b81e81"). InnerVolumeSpecName "kube-api-access-bdc7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.241764 4890 generic.go:334] "Generic (PLEG): container finished" podID="4099ef81-b3a1-4e17-af41-48813a488181" containerID="670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f" exitCode=0 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.241790 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "33bbda2a-fde6-466f-92c8-88556941b8a3" (UID: "33bbda2a-fde6-466f-92c8-88556941b8a3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.241898 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.241912 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4099ef81-b3a1-4e17-af41-48813a488181","Type":"ContainerDied","Data":"670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.242570 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4099ef81-b3a1-4e17-af41-48813a488181","Type":"ContainerDied","Data":"13868da3015f9a8897730019f67e00a5c1630ade212710d3d8f41d620b26efa5"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.249806 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.249929 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e775a69e-619f-4920-8fc9-6d216e400c0e","Type":"ContainerDied","Data":"bcca4b36076ab261210e22493a6b04ff5095992300367c852b98b6aa5f867e4c"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.251786 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "84118502-58f0-48b2-b659-7f748311fa22" (UID: "84118502-58f0-48b2-b659-7f748311fa22"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.253678 4890 generic.go:334] "Generic (PLEG): container finished" podID="770a4f11-b2a3-46fd-a06d-3af27edd3d9f" containerID="58a6039e13c8c21e15265060210fffee78adef95926968b02f31c6424dc6e4e2" exitCode=0 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.253753 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"770a4f11-b2a3-46fd-a06d-3af27edd3d9f","Type":"ContainerDied","Data":"58a6039e13c8c21e15265060210fffee78adef95926968b02f31c6424dc6e4e2"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.253780 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"770a4f11-b2a3-46fd-a06d-3af27edd3d9f","Type":"ContainerDied","Data":"13309ead34de9ffa9030777afa7d73bcbd20e6dd5d99e7fd9bac7506ced9f198"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.253792 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13309ead34de9ffa9030777afa7d73bcbd20e6dd5d99e7fd9bac7506ced9f198" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.256313 4890 generic.go:334] "Generic (PLEG): container finished" podID="50c99515-8e62-4e54-9ffc-e9294db2dc4f" containerID="90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4" exitCode=0 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.256445 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"50c99515-8e62-4e54-9ffc-e9294db2dc4f","Type":"ContainerDied","Data":"90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.261188 4890 generic.go:334] "Generic (PLEG): container finished" podID="736df6ca-1308-4f87-a39e-7aca6ad4d5a1" containerID="612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54" exitCode=2 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.261259 
4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"736df6ca-1308-4f87-a39e-7aca6ad4d5a1","Type":"ContainerDied","Data":"612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.261287 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"736df6ca-1308-4f87-a39e-7aca6ad4d5a1","Type":"ContainerDied","Data":"06b6392576ce5b1b073fa85b31665f25e5e5344c3929f976645fa39a084f41c0"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.261318 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.268128 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-config-data" (OuterVolumeSpecName: "config-data") pod "84118502-58f0-48b2-b659-7f748311fa22" (UID: "84118502-58f0-48b2-b659-7f748311fa22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.269789 4890 generic.go:334] "Generic (PLEG): container finished" podID="2780ff06-b30a-43e8-97d5-b9477d2713d6" containerID="d4a5a52d2c5dbc8140605411d1d6694c13a149e34211ff2de1edf57e55a03b12" exitCode=0 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.269855 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2780ff06-b30a-43e8-97d5-b9477d2713d6","Type":"ContainerDied","Data":"d4a5a52d2c5dbc8140605411d1d6694c13a149e34211ff2de1edf57e55a03b12"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.272122 4890 generic.go:334] "Generic (PLEG): container finished" podID="84118502-58f0-48b2-b659-7f748311fa22" containerID="f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9" exitCode=0 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.272219 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84118502-58f0-48b2-b659-7f748311fa22","Type":"ContainerDied","Data":"f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.272286 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84118502-58f0-48b2-b659-7f748311fa22","Type":"ContainerDied","Data":"99a71dff937055c422e54e691ad75297d2914d5942943363dc080847f72bdd3b"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.272395 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.276399 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p8whv" event={"ID":"ef1ee1ae-c8ba-469c-ad49-896510b81e81","Type":"ContainerDied","Data":"cbfaafd60396a31b01efc8d8526cdb1f72e3b5960589d2afa866b3631242e11b"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.276601 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p8whv" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.281180 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "84118502-58f0-48b2-b659-7f748311fa22" (UID: "84118502-58f0-48b2-b659-7f748311fa22"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.283495 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-de17-account-create-update-crzrr" event={"ID":"86742085-590c-4ce5-b694-8a91a90c0b6f","Type":"ContainerDied","Data":"75d5bd667ec839ed05c5c7f5e39e1a3ad37b1cce4bbf9250306b0f23226f3286"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.283657 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-de17-account-create-update-crzrr" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.285413 4890 generic.go:334] "Generic (PLEG): container finished" podID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerID="9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091" exitCode=0 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.285489 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6946c9f5b4-2l82t" event={"ID":"33bbda2a-fde6-466f-92c8-88556941b8a3","Type":"ContainerDied","Data":"9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.285516 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6946c9f5b4-2l82t" event={"ID":"33bbda2a-fde6-466f-92c8-88556941b8a3","Type":"ContainerDied","Data":"b43159309c1ae62ac35c149df1b31a33a3fb89155628ce87bc30f11a30db813b"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.285589 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6946c9f5b4-2l82t" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.287489 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5z4qn" event={"ID":"b402af9c-655e-4cd8-91a4-f9ff4f8ef671","Type":"ContainerStarted","Data":"e8ea9add3c2b9995d32dc12e3136a8383f758efa51a2c9ece0a3e2cad5a26755"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.288100 4890 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-5z4qn" secret="" err="secret \"galera-openstack-dockercfg-hsd58\" not found" Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.291965 4890 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:57:03 crc kubenswrapper[4890]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 15:57:03 crc kubenswrapper[4890]: Jan 21 15:57:03 crc kubenswrapper[4890]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 15:57:03 crc kubenswrapper[4890]: Jan 21 15:57:03 crc kubenswrapper[4890]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 15:57:03 crc kubenswrapper[4890]: Jan 21 15:57:03 crc kubenswrapper[4890]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 15:57:03 crc kubenswrapper[4890]: Jan 21 15:57:03 crc kubenswrapper[4890]: if [ -n "" ]; then Jan 21 15:57:03 crc kubenswrapper[4890]: GRANT_DATABASE="" Jan 21 15:57:03 crc kubenswrapper[4890]: else Jan 21 15:57:03 crc kubenswrapper[4890]: GRANT_DATABASE="*" Jan 21 15:57:03 crc kubenswrapper[4890]: fi Jan 21 15:57:03 crc kubenswrapper[4890]: Jan 21 15:57:03 crc kubenswrapper[4890]: # going for maximum compatibility here: Jan 21 15:57:03 crc kubenswrapper[4890]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 15:57:03 crc kubenswrapper[4890]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 15:57:03 crc kubenswrapper[4890]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 15:57:03 crc kubenswrapper[4890]: # support updates Jan 21 15:57:03 crc kubenswrapper[4890]: Jan 21 15:57:03 crc kubenswrapper[4890]: $MYSQL_CMD < logger="UnhandledError" Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.293089 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-5z4qn" podUID="b402af9c-655e-4cd8-91a4-f9ff4f8ef671" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.295564 4890 generic.go:334] "Generic (PLEG): container finished" podID="371fefce-bb16-4c48-ac5a-01885e77c090" containerID="7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f" exitCode=0 Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.295656 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"371fefce-bb16-4c48-ac5a-01885e77c090","Type":"ContainerDied","Data":"7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.295681 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"371fefce-bb16-4c48-ac5a-01885e77c090","Type":"ContainerDied","Data":"678cb303b1c60ecec61a6716ae72881f7f9503ea5e7f72f36da5431eda4ea2c7"} Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.295772 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.299861 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.320609 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data" (OuterVolumeSpecName: "config-data") pod "33bbda2a-fde6-466f-92c8-88556941b8a3" (UID: "33bbda2a-fde6-466f-92c8-88556941b8a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.320608 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4099ef81-b3a1-4e17-af41-48813a488181" (UID: "4099ef81-b3a1-4e17-af41-48813a488181"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.321831 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data" (OuterVolumeSpecName: "config-data") pod "371fefce-bb16-4c48-ac5a-01885e77c090" (UID: "371fefce-bb16-4c48-ac5a-01885e77c090"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328493 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328518 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328531 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328540 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328548 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84118502-58f0-48b2-b659-7f748311fa22-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328557 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328568 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdc7k\" (UniqueName: \"kubernetes.io/projected/ef1ee1ae-c8ba-469c-ad49-896510b81e81-kube-api-access-bdc7k\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328588 4890 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328598 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371fefce-bb16-4c48-ac5a-01885e77c090-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328609 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1ee1ae-c8ba-469c-ad49-896510b81e81-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.328622 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbda2a-fde6-466f-92c8-88556941b8a3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.386877 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4099ef81-b3a1-4e17-af41-48813a488181" (UID: "4099ef81-b3a1-4e17-af41-48813a488181"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.430638 4890 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4099ef81-b3a1-4e17-af41-48813a488181-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.430981 4890 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.431032 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts podName:b402af9c-655e-4cd8-91a4-f9ff4f8ef671 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:03.931014513 +0000 UTC m=+1506.292456922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts") pod "root-account-create-update-5z4qn" (UID: "b402af9c-655e-4cd8-91a4-f9ff4f8ef671") : configmap "openstack-scripts" not found Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.444707 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.455471 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.467325 4890 scope.go:117] "RemoveContainer" containerID="a0f8f3b3b110e555d59db6b93fc91f9b56e10fd7253b81778b2e41c868e02c8a" Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.514420 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.518721 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.520659 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.522563 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.522638 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" 
containerName="galera"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.524767 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.530767 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.534936 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-config-data\") pod \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.535003 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-memcached-tls-certs\") pod \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.535095 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9jmm\" (UniqueName: \"kubernetes.io/projected/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kube-api-access-q9jmm\") pod \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.535189 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kolla-config\") pod \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.535790 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-combined-ca-bundle\") pod \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\" (UID: \"770a4f11-b2a3-46fd-a06d-3af27edd3d9f\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.536774 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.539114 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-config-data" (OuterVolumeSpecName: "config-data") pod "770a4f11-b2a3-46fd-a06d-3af27edd3d9f" (UID: "770a4f11-b2a3-46fd-a06d-3af27edd3d9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.539482 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "770a4f11-b2a3-46fd-a06d-3af27edd3d9f" (UID: "770a4f11-b2a3-46fd-a06d-3af27edd3d9f"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.543662 4890 scope.go:117] "RemoveContainer" containerID="670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.544837 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.546985 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kube-api-access-q9jmm" (OuterVolumeSpecName: "kube-api-access-q9jmm") pod "770a4f11-b2a3-46fd-a06d-3af27edd3d9f" (UID: "770a4f11-b2a3-46fd-a06d-3af27edd3d9f"). InnerVolumeSpecName "kube-api-access-q9jmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.551592 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.552300 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.559100 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.581459 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "770a4f11-b2a3-46fd-a06d-3af27edd3d9f" (UID: "770a4f11-b2a3-46fd-a06d-3af27edd3d9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.588214 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p8whv"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.596548 4890 scope.go:117] "RemoveContainer" containerID="6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.597167 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p8whv"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.604736 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "770a4f11-b2a3-46fd-a06d-3af27edd3d9f" (UID: "770a4f11-b2a3-46fd-a06d-3af27edd3d9f"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.614731 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-de17-account-create-update-crzrr"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.622208 4890 scope.go:117] "RemoveContainer" containerID="670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f"
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.622954 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f\": container with ID starting with 670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f not found: ID does not exist" containerID="670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.622979 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f"} err="failed to get container status \"670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f\": rpc error: code = NotFound desc = could not find container \"670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f\": container with ID starting with 670db3d06c5a2ffa51f33eca9423b09d5084ba53db817cd6ac3f4a57529a332f not found: ID does not exist"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.623001 4890 scope.go:117] "RemoveContainer" containerID="6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791"
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.623533 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791\": container with ID starting with 6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791 not found: ID does not exist" containerID="6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.623580 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791"} err="failed to get container status \"6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791\": rpc error: code = NotFound desc = could not find container \"6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791\": container with ID starting with 6a648fe355b6be26dd32a97f351e17fdd8c6cce1d28774b0a9c1eb2eef2a0791 not found: ID does not exist"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.623612 4890 scope.go:117] "RemoveContainer" containerID="449855515a900befe1127318232d23ee1ce08ab1fc81e724dd3ee85e1bdccca0"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.641189 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-de17-account-create-update-crzrr"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.641881 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-config-data\") pod \"2780ff06-b30a-43e8-97d5-b9477d2713d6\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.641913 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-combined-ca-bundle\") pod \"2780ff06-b30a-43e8-97d5-b9477d2713d6\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.641981 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-combined-ca-bundle\") pod \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.642104 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-config-data\") pod \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.642948 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h982\" (UniqueName: \"kubernetes.io/projected/50c99515-8e62-4e54-9ffc-e9294db2dc4f-kube-api-access-8h982\") pod \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\" (UID: \"50c99515-8e62-4e54-9ffc-e9294db2dc4f\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.643134 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqtn5\" (UniqueName: \"kubernetes.io/projected/2780ff06-b30a-43e8-97d5-b9477d2713d6-kube-api-access-cqtn5\") pod \"2780ff06-b30a-43e8-97d5-b9477d2713d6\" (UID: \"2780ff06-b30a-43e8-97d5-b9477d2713d6\") "
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.643653 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.643689 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.643700 4890 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-memcached-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.643712 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9jmm\" (UniqueName: \"kubernetes.io/projected/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kube-api-access-q9jmm\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.643722 4890 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/770a4f11-b2a3-46fd-a06d-3af27edd3d9f-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.651855 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2780ff06-b30a-43e8-97d5-b9477d2713d6-kube-api-access-cqtn5" (OuterVolumeSpecName: "kube-api-access-cqtn5") pod "2780ff06-b30a-43e8-97d5-b9477d2713d6" (UID: "2780ff06-b30a-43e8-97d5-b9477d2713d6"). InnerVolumeSpecName "kube-api-access-cqtn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.657082 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c99515-8e62-4e54-9ffc-e9294db2dc4f-kube-api-access-8h982" (OuterVolumeSpecName: "kube-api-access-8h982") pod "50c99515-8e62-4e54-9ffc-e9294db2dc4f" (UID: "50c99515-8e62-4e54-9ffc-e9294db2dc4f"). InnerVolumeSpecName "kube-api-access-8h982". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.664687 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.674786 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-config-data" (OuterVolumeSpecName: "config-data") pod "2780ff06-b30a-43e8-97d5-b9477d2713d6" (UID: "2780ff06-b30a-43e8-97d5-b9477d2713d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.675750 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.682767 4890 scope.go:117] "RemoveContainer" containerID="1ca3498c72178f6185568c6444f79a4b05e9c4a827b67e2ab8184900041c243b"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.686558 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50c99515-8e62-4e54-9ffc-e9294db2dc4f" (UID: "50c99515-8e62-4e54-9ffc-e9294db2dc4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.690019 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6946c9f5b4-2l82t"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.697912 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-config-data" (OuterVolumeSpecName: "config-data") pod "50c99515-8e62-4e54-9ffc-e9294db2dc4f" (UID: "50c99515-8e62-4e54-9ffc-e9294db2dc4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.701923 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6946c9f5b4-2l82t"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.702108 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2780ff06-b30a-43e8-97d5-b9477d2713d6" (UID: "2780ff06-b30a-43e8-97d5-b9477d2713d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.709665 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.726078 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.736288 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.745092 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.745120 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c99515-8e62-4e54-9ffc-e9294db2dc4f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.745130 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h982\" (UniqueName: \"kubernetes.io/projected/50c99515-8e62-4e54-9ffc-e9294db2dc4f-kube-api-access-8h982\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.745141 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqtn5\" (UniqueName: \"kubernetes.io/projected/2780ff06-b30a-43e8-97d5-b9477d2713d6-kube-api-access-cqtn5\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.745150 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.745159 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2780ff06-b30a-43e8-97d5-b9477d2713d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.747230 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.782969 4890 scope.go:117] "RemoveContainer" containerID="612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.823494 4890 scope.go:117] "RemoveContainer" containerID="612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54"
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.824363 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54\": container with ID starting with 612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54 not found: ID does not exist" containerID="612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.824402 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54"} err="failed to get container status \"612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54\": rpc error: code = NotFound desc = could not find container \"612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54\": container with ID starting with 612cc5859d17687ea1231861d27c05e20020161938907529791ddb1ee1a5ff54 not found: ID does not exist"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.824431 4890 scope.go:117] "RemoveContainer" containerID="f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.859742 4890 scope.go:117] "RemoveContainer" containerID="60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.886064 4890 scope.go:117] "RemoveContainer" containerID="f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9"
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.886536 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9\": container with ID starting with f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9 not found: ID does not exist" containerID="f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.886593 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9"} err="failed to get container status \"f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9\": rpc error: code = NotFound desc = could not find container \"f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9\": container with ID starting with f96231cbb9a5f1cc7fbecdc64e8b3a65b0069cbd1a310a5baeddb6be8629c3d9 not found: ID does not exist"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.886629 4890 scope.go:117] "RemoveContainer" containerID="60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d"
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.887536 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d\": container with ID starting with 60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d not found: ID does not exist" containerID="60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.887571 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d"} err="failed to get container status \"60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d\": rpc error: code = NotFound desc = could not find container \"60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d\": container with ID starting with 60f3ed8a676f7e7949cf80a3dbe51c3db78e1d64c54b2b4327a767c24e11fe9d not found: ID does not exist"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.887592 4890 scope.go:117] "RemoveContainer" containerID="9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.918552 4890 scope.go:117] "RemoveContainer" containerID="6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.925367 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944" path="/var/lib/kubelet/pods/0fbfebc6-e25e-47f6-97d1-dd9d8c0cd944/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.925900 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1463d4e1-9ed2-4f45-b473-a94d18a4156f" path="/var/lib/kubelet/pods/1463d4e1-9ed2-4f45-b473-a94d18a4156f/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.926932 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a25c82c-f72c-4ecb-a760-a568761bd5f2" path="/var/lib/kubelet/pods/2a25c82c-f72c-4ecb-a760-a568761bd5f2/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.927898 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" path="/var/lib/kubelet/pods/33bbda2a-fde6-466f-92c8-88556941b8a3/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.929639 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="371fefce-bb16-4c48-ac5a-01885e77c090" path="/var/lib/kubelet/pods/371fefce-bb16-4c48-ac5a-01885e77c090/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.930582 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4099ef81-b3a1-4e17-af41-48813a488181" path="/var/lib/kubelet/pods/4099ef81-b3a1-4e17-af41-48813a488181/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.931477 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697e1d3a-fab0-471b-bea8-43212f489fec" path="/var/lib/kubelet/pods/697e1d3a-fab0-471b-bea8-43212f489fec/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.932611 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736df6ca-1308-4f87-a39e-7aca6ad4d5a1" path="/var/lib/kubelet/pods/736df6ca-1308-4f87-a39e-7aca6ad4d5a1/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.933655 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84118502-58f0-48b2-b659-7f748311fa22" path="/var/lib/kubelet/pods/84118502-58f0-48b2-b659-7f748311fa22/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.934185 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86742085-590c-4ce5-b694-8a91a90c0b6f" path="/var/lib/kubelet/pods/86742085-590c-4ce5-b694-8a91a90c0b6f/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.934721 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b03ee9-0cb8-49eb-b3da-3d1c42e15720" path="/var/lib/kubelet/pods/91b03ee9-0cb8-49eb-b3da-3d1c42e15720/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.935880 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd2d089-a1a5-4e25-920a-a485d0fd319f" path="/var/lib/kubelet/pods/cdd2d089-a1a5-4e25-920a-a485d0fd319f/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.936576 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d009f76d-bc65-453c-a05f-29454314ab7a" path="/var/lib/kubelet/pods/d009f76d-bc65-453c-a05f-29454314ab7a/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.937181 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3" path="/var/lib/kubelet/pods/da5d1d48-6be1-4d0a-b341-dca4c1d6e4a3/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.938424 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e775a69e-619f-4920-8fc9-6d216e400c0e" path="/var/lib/kubelet/pods/e775a69e-619f-4920-8fc9-6d216e400c0e/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.939047 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef076f7d-7b53-4a05-8208-8dfa2ee2d415" path="/var/lib/kubelet/pods/ef076f7d-7b53-4a05-8208-8dfa2ee2d415/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.940608 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1ee1ae-c8ba-469c-ad49-896510b81e81" path="/var/lib/kubelet/pods/ef1ee1ae-c8ba-469c-ad49-896510b81e81/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.941007 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ca330d-0795-4c1d-8a5e-12df75f280ba" path="/var/lib/kubelet/pods/f3ca330d-0795-4c1d-8a5e-12df75f280ba/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.941909 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa88b847-d54b-4e99-8dee-39c83f0a06d8" path="/var/lib/kubelet/pods/fa88b847-d54b-4e99-8dee-39c83f0a06d8/volumes"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.947586 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2tq9\" (UniqueName: \"kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9\") pod \"keystone-358a-account-create-update-gzppl\" (UID: \"abe082f6-090f-4887-ab4a-cee13a8ad2a2\") " pod="openstack/keystone-358a-account-create-update-gzppl"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.947698 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts\") pod \"keystone-358a-account-create-update-gzppl\" (UID: \"abe082f6-090f-4887-ab4a-cee13a8ad2a2\") " pod="openstack/keystone-358a-account-create-update-gzppl"
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.947868 4890 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.947928 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts podName:b402af9c-655e-4cd8-91a4-f9ff4f8ef671 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:04.947910345 +0000 UTC m=+1507.309352754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts") pod "root-account-create-update-5z4qn" (UID: "b402af9c-655e-4cd8-91a4-f9ff4f8ef671") : configmap "openstack-scripts" not found
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.948547 4890 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.948583 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts podName:abe082f6-090f-4887-ab4a-cee13a8ad2a2 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:05.948572882 +0000 UTC m=+1508.310015291 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts") pod "keystone-358a-account-create-update-gzppl" (UID: "abe082f6-090f-4887-ab4a-cee13a8ad2a2") : configmap "openstack-scripts" not found
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.950970 4890 scope.go:117] "RemoveContainer" containerID="9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091"
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.951896 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091\": container with ID starting with 9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091 not found: ID does not exist" containerID="9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.951945 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091"} err="failed to get container status \"9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091\": rpc error: code = NotFound desc = could not find container \"9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091\": container with ID starting with 9b621db57e99eaae7098f79ef3ba31f35408a09d036ff915da05642bf79a5091 not found: ID does not exist"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.951975 4890 scope.go:117] "RemoveContainer" containerID="6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333"
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.952331 4890 projected.go:194] Error preparing data for projected volume kube-api-access-n2tq9 for pod openstack/keystone-358a-account-create-update-gzppl: failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.952406 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9 podName:abe082f6-090f-4887-ab4a-cee13a8ad2a2 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:05.952388197 +0000 UTC m=+1508.313830606 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n2tq9" (UniqueName: "kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9") pod "keystone-358a-account-create-update-gzppl" (UID: "abe082f6-090f-4887-ab4a-cee13a8ad2a2") : failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 21 15:57:03 crc kubenswrapper[4890]: E0121 15:57:03.952466 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333\": container with ID starting with 6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333 not found: ID does not exist" containerID="6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.952486 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333"} err="failed to get container status \"6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333\": rpc error: code = NotFound desc = could not find container \"6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333\": container with ID starting with 6ded81ce47fe0d371d55567f4ffdf2dfe89c4f5c119ff633f1911fded1dff333 not found: ID does not exist"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.952499 4890 scope.go:117] "RemoveContainer" containerID="7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f"
Jan 21 15:57:03 crc kubenswrapper[4890]: I0121 15:57:03.998604 4890 scope.go:117] "RemoveContainer" containerID="766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7"
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.032213 4890 scope.go:117] "RemoveContainer" containerID="7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f"
Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.034528 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f\": container with ID starting with 7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f not found: ID does not exist" containerID="7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f"
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.034582 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f"} err="failed to get container status \"7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f\": rpc error: code = NotFound desc = could not find container \"7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f\": container with ID starting with 7fe324bb64d7a8839007e954f58321ff1fbc5d2d58147da0502d9c095c34d88f not found: ID does not exist"
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.034620 4890 scope.go:117] "RemoveContainer" containerID="766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7"
Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.034934 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7\": container with ID starting with 766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7 not found: ID does not exist" containerID="766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7"
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.034958 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7"} err="failed to get container status \"766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7\": rpc error: code = NotFound desc = could not find container \"766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7\": container with ID starting with 766ead7cbfd13a7259f0df3af6f041ae80acbefb42ecbbd8e2941e3d36799be7 not found: ID does not exist"
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.107119 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_332f4b6c-7fea-4dae-bb46-3c35ee84ba25/ovn-northd/0.log"
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.107453 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.251748 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-northd-tls-certs\") pod \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") "
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.251822 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7bnf\" (UniqueName: \"kubernetes.io/projected/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-kube-api-access-c7bnf\") pod \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") "
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.251896 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-rundir\") pod \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") "
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.251936 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-config\") pod \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") "
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.251992 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-metrics-certs-tls-certs\") pod \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") "
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.252009 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-combined-ca-bundle\") pod \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") "
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.252063 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-scripts\") pod \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\" (UID: \"332f4b6c-7fea-4dae-bb46-3c35ee84ba25\") "
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.252489 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "332f4b6c-7fea-4dae-bb46-3c35ee84ba25" (UID: "332f4b6c-7fea-4dae-bb46-3c35ee84ba25"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.252760 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-config" (OuterVolumeSpecName: "config") pod "332f4b6c-7fea-4dae-bb46-3c35ee84ba25" (UID: "332f4b6c-7fea-4dae-bb46-3c35ee84ba25"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.252769 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-scripts" (OuterVolumeSpecName: "scripts") pod "332f4b6c-7fea-4dae-bb46-3c35ee84ba25" (UID: "332f4b6c-7fea-4dae-bb46-3c35ee84ba25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.258563 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-kube-api-access-c7bnf" (OuterVolumeSpecName: "kube-api-access-c7bnf") pod "332f4b6c-7fea-4dae-bb46-3c35ee84ba25" (UID: "332f4b6c-7fea-4dae-bb46-3c35ee84ba25"). InnerVolumeSpecName "kube-api-access-c7bnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.293003 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "332f4b6c-7fea-4dae-bb46-3c35ee84ba25" (UID: "332f4b6c-7fea-4dae-bb46-3c35ee84ba25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.324580 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "332f4b6c-7fea-4dae-bb46-3c35ee84ba25" (UID: "332f4b6c-7fea-4dae-bb46-3c35ee84ba25"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.328229 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "332f4b6c-7fea-4dae-bb46-3c35ee84ba25" (UID: "332f4b6c-7fea-4dae-bb46-3c35ee84ba25"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.333326 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"50c99515-8e62-4e54-9ffc-e9294db2dc4f","Type":"ContainerDied","Data":"000aae2b2f59e6d8873edd49f49e70ab03a73a68859e8f9d5a80519c1e410d65"} Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.333381 4890 scope.go:117] "RemoveContainer" containerID="90c4bbf1045b59f3d9d7a5a972e1e7c1bd6ef82ab223b6629c444ca53ba402d4" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.333464 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.339464 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_332f4b6c-7fea-4dae-bb46-3c35ee84ba25/ovn-northd/0.log" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.339511 4890 generic.go:334] "Generic (PLEG): container finished" podID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerID="e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0" exitCode=139 Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.339565 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"332f4b6c-7fea-4dae-bb46-3c35ee84ba25","Type":"ContainerDied","Data":"e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0"} Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.339593 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"332f4b6c-7fea-4dae-bb46-3c35ee84ba25","Type":"ContainerDied","Data":"547edafaa7999af09852c07995b5137528252aa181ddc52eb350cd445c381aee"} Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.339662 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.344984 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2780ff06-b30a-43e8-97d5-b9477d2713d6","Type":"ContainerDied","Data":"8b7810bcc8d4db291b73ea21d4da00a3f230594cd0b1cf2f32cec5750bb3de18"} Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.345248 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.350719 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.351743 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-358a-account-create-update-gzppl" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.353261 4890 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.353281 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7bnf\" (UniqueName: \"kubernetes.io/projected/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-kube-api-access-c7bnf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.353295 4890 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.353307 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.353321 4890 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.353332 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.353342 4890 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/332f4b6c-7fea-4dae-bb46-3c35ee84ba25-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.353417 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.353465 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data podName:9bb9aa52-0895-418e-8e0b-d922948e85a7 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:12.353449737 +0000 UTC m=+1514.714892146 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data") pod "rabbitmq-cell1-server-0" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7") : configmap "rabbitmq-cell1-config-data" not found Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.400132 4890 scope.go:117] "RemoveContainer" containerID="abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052" Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.434334 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.437082 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 
15:57:04.439414 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.447025 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="477ba084-e185-42c6-a0ae-f5de448a4d13" containerName="nova-cell1-conductor-conductor" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.477704 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-358a-account-create-update-gzppl"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.500009 4890 scope.go:117] "RemoveContainer" containerID="e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.503377 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-358a-account-create-update-gzppl"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.509450 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.515246 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.519889 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.524722 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.528942 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/memcached-0"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.533933 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.540248 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.544990 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.581916 4890 scope.go:117] "RemoveContainer" containerID="abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052" Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.583036 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052\": container with ID starting with abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052 not found: ID does not exist" containerID="abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.583086 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052"} err="failed to get container status \"abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052\": rpc error: code = NotFound desc = could not find container \"abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052\": container with ID starting with abe624741eafe3f184d21d5aaf34939119fbff7a2c2ff8bec03c3e56df4d1052 not found: ID does not exist" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.583121 4890 scope.go:117] "RemoveContainer" containerID="e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0" Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.583834 4890 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0\": container with ID starting with e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0 not found: ID does not exist" containerID="e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.583877 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0"} err="failed to get container status \"e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0\": rpc error: code = NotFound desc = could not find container \"e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0\": container with ID starting with e68cb6e7cee1aced1eb43d561d3f92a8b64747a5c564e0f1e1e6fb5fb526c9e0 not found: ID does not exist" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.583905 4890 scope.go:117] "RemoveContainer" containerID="d4a5a52d2c5dbc8140605411d1d6694c13a149e34211ff2de1edf57e55a03b12" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.665666 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2tq9\" (UniqueName: \"kubernetes.io/projected/abe082f6-090f-4887-ab4a-cee13a8ad2a2-kube-api-access-n2tq9\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.665703 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe082f6-090f-4887-ab4a-cee13a8ad2a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.665779 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 15:57:04 crc kubenswrapper[4890]: E0121 15:57:04.665837 4890 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data podName:caae7093-b594-47fb-b863-38d825f0048d nodeName:}" failed. No retries permitted until 2026-01-21 15:57:12.66581727 +0000 UTC m=+1515.027259679 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data") pod "rabbitmq-server-0" (UID: "caae7093-b594-47fb-b863-38d825f0048d") : configmap "rabbitmq-config-data" not found Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.694132 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ddd577785-xdkgv" podUID="212a7372-7b31-40f6-bef8-fc76925be961" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: i/o timeout" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.698657 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.766640 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqf84\" (UniqueName: \"kubernetes.io/projected/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-kube-api-access-dqf84\") pod \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\" (UID: \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\") " Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.767146 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts\") pod \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\" (UID: \"b402af9c-655e-4cd8-91a4-f9ff4f8ef671\") " Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.767982 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "b402af9c-655e-4cd8-91a4-f9ff4f8ef671" (UID: "b402af9c-655e-4cd8-91a4-f9ff4f8ef671"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.770318 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-kube-api-access-dqf84" (OuterVolumeSpecName: "kube-api-access-dqf84") pod "b402af9c-655e-4cd8-91a4-f9ff4f8ef671" (UID: "b402af9c-655e-4cd8-91a4-f9ff4f8ef671"). InnerVolumeSpecName "kube-api-access-dqf84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.870118 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:04 crc kubenswrapper[4890]: I0121 15:57:04.870159 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqf84\" (UniqueName: \"kubernetes.io/projected/b402af9c-655e-4cd8-91a4-f9ff4f8ef671-kube-api-access-dqf84\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.046182 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.174121 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-operator-scripts\") pod \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.174191 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kolla-config\") pod \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.174240 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqfp6\" (UniqueName: \"kubernetes.io/projected/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kube-api-access-gqfp6\") pod \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.174299 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.174387 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-generated\") pod \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.174418 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-combined-ca-bundle\") pod \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.174488 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-default\") pod \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.174562 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-galera-tls-certs\") pod \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\" (UID: \"cc7a8eb5-11e0-4603-b80a-3b4f6e724770\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.179226 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "cc7a8eb5-11e0-4603-b80a-3b4f6e724770" (UID: "cc7a8eb5-11e0-4603-b80a-3b4f6e724770"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.179291 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "cc7a8eb5-11e0-4603-b80a-3b4f6e724770" (UID: "cc7a8eb5-11e0-4603-b80a-3b4f6e724770"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.179331 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "cc7a8eb5-11e0-4603-b80a-3b4f6e724770" (UID: "cc7a8eb5-11e0-4603-b80a-3b4f6e724770"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.179943 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc7a8eb5-11e0-4603-b80a-3b4f6e724770" (UID: "cc7a8eb5-11e0-4603-b80a-3b4f6e724770"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.185983 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kube-api-access-gqfp6" (OuterVolumeSpecName: "kube-api-access-gqfp6") pod "cc7a8eb5-11e0-4603-b80a-3b4f6e724770" (UID: "cc7a8eb5-11e0-4603-b80a-3b4f6e724770"). InnerVolumeSpecName "kube-api-access-gqfp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.186879 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "cc7a8eb5-11e0-4603-b80a-3b4f6e724770" (UID: "cc7a8eb5-11e0-4603-b80a-3b4f6e724770"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.196756 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc7a8eb5-11e0-4603-b80a-3b4f6e724770" (UID: "cc7a8eb5-11e0-4603-b80a-3b4f6e724770"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.243880 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "cc7a8eb5-11e0-4603-b80a-3b4f6e724770" (UID: "cc7a8eb5-11e0-4603-b80a-3b4f6e724770"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.274816 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.275250 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.275479 4890 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.275506 4890 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.276095 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.277189 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.278916 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.278943 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.292031 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.292080 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.292101 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.292121 4890 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.292140 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.292163 4890 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.292188 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqfp6\" (UniqueName: \"kubernetes.io/projected/cc7a8eb5-11e0-4603-b80a-3b4f6e724770-kube-api-access-gqfp6\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.292236 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.320841 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.320958 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.367060 4890 generic.go:334] "Generic (PLEG): container finished" podID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" containerID="eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341" exitCode=0 Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.367208 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cc7a8eb5-11e0-4603-b80a-3b4f6e724770","Type":"ContainerDied","Data":"eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341"} Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.367515 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cc7a8eb5-11e0-4603-b80a-3b4f6e724770","Type":"ContainerDied","Data":"b99c38f26ca2e62d9f5bb7864e021dad61fbc01aeb3cca6b4c9de2f33837adac"} Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.367641 4890 scope.go:117] 
"RemoveContainer" containerID="eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.367292 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.405865 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-confd\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.405972 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/caae7093-b594-47fb-b863-38d825f0048d-erlang-cookie-secret\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406011 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rpcj\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-kube-api-access-5rpcj\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406064 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-tls\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406092 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/caae7093-b594-47fb-b863-38d825f0048d-pod-info\") pod 
\"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406141 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-plugins-conf\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406157 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406187 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-server-conf\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406211 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406256 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-erlang-cookie\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406274 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-plugins\") pod \"caae7093-b594-47fb-b863-38d825f0048d\" (UID: \"caae7093-b594-47fb-b863-38d825f0048d\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.406579 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.407158 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.411520 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.412318 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.413233 4890 generic.go:334] "Generic (PLEG): container finished" podID="9bb9aa52-0895-418e-8e0b-d922948e85a7" containerID="489037191e7d74a2730eac1c46abc09d34fce2781e436638fcd47291281cfd30" exitCode=0 Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.413476 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9bb9aa52-0895-418e-8e0b-d922948e85a7","Type":"ContainerDied","Data":"489037191e7d74a2730eac1c46abc09d34fce2781e436638fcd47291281cfd30"} Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.413538 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9bb9aa52-0895-418e-8e0b-d922948e85a7","Type":"ContainerDied","Data":"12fe1f941cc500ce6c437416684b64243a759e1f17e92c4153cbf5a6bc326057"} Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.413548 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12fe1f941cc500ce6c437416684b64243a759e1f17e92c4153cbf5a6bc326057" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.414715 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/caae7093-b594-47fb-b863-38d825f0048d-pod-info" (OuterVolumeSpecName: "pod-info") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.415071 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caae7093-b594-47fb-b863-38d825f0048d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.415522 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.422664 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.428244 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-kube-api-access-5rpcj" (OuterVolumeSpecName: "kube-api-access-5rpcj") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "kube-api-access-5rpcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.433260 4890 generic.go:334] "Generic (PLEG): container finished" podID="caae7093-b594-47fb-b863-38d825f0048d" containerID="ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2" exitCode=0 Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.433406 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caae7093-b594-47fb-b863-38d825f0048d","Type":"ContainerDied","Data":"ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2"} Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.434623 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caae7093-b594-47fb-b863-38d825f0048d","Type":"ContainerDied","Data":"f245e1fe5f5f6bde901fa2a6994facf2644276f4ebfc8d6b57ea64380d885c7a"} Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.433475 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.451130 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5z4qn" event={"ID":"b402af9c-655e-4cd8-91a4-f9ff4f8ef671","Type":"ContainerDied","Data":"e8ea9add3c2b9995d32dc12e3136a8383f758efa51a2c9ece0a3e2cad5a26755"} Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.451195 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5z4qn" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.455050 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.482673 4890 scope.go:117] "RemoveContainer" containerID="2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.483179 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data" (OuterVolumeSpecName: "config-data") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.483637 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-server-conf" (OuterVolumeSpecName: "server-conf") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.495691 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.505405 4890 scope.go:117] "RemoveContainer" containerID="eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510030 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510273 4890 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510287 4890 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510297 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caae7093-b594-47fb-b863-38d825f0048d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510309 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510319 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 
15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510328 4890 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/caae7093-b594-47fb-b863-38d825f0048d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510339 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rpcj\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-kube-api-access-5rpcj\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510369 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.510380 4890 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/caae7093-b594-47fb-b863-38d825f0048d-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.513307 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.513553 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341\": container with ID starting with eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341 not found: ID does not exist" containerID="eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.516582 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341"} err="failed to get container status 
\"eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341\": rpc error: code = NotFound desc = could not find container \"eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341\": container with ID starting with eeb2917de0788abb4c2899b4290831bab68896a99fc093135226a5654ce03341 not found: ID does not exist" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.516636 4890 scope.go:117] "RemoveContainer" containerID="2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd" Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.517103 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd\": container with ID starting with 2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd not found: ID does not exist" containerID="2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.517132 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd"} err="failed to get container status \"2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd\": rpc error: code = NotFound desc = could not find container \"2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd\": container with ID starting with 2542dc356509d51811c103dc7e8d243ad8a40c04a3b993b20c35a1e3ad2bc5fd not found: ID does not exist" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.517153 4890 scope.go:117] "RemoveContainer" containerID="ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.533572 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 15:57:05 crc 
kubenswrapper[4890]: I0121 15:57:05.537680 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "caae7093-b594-47fb-b863-38d825f0048d" (UID: "caae7093-b594-47fb-b863-38d825f0048d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.562794 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5z4qn"] Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.566145 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5z4qn"] Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.577279 4890 scope.go:117] "RemoveContainer" containerID="f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.599576 4890 scope.go:117] "RemoveContainer" containerID="ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2" Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.600001 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2\": container with ID starting with ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2 not found: ID does not exist" containerID="ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.600040 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2"} err="failed to get container status \"ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2\": rpc error: code = NotFound desc = could not find container 
\"ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2\": container with ID starting with ed1c947b35b5a4452a677ae8fa1f47ab8b281969aa9a7e049790e585cbaa8bd2 not found: ID does not exist" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.600065 4890 scope.go:117] "RemoveContainer" containerID="f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66" Jan 21 15:57:05 crc kubenswrapper[4890]: E0121 15:57:05.600466 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66\": container with ID starting with f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66 not found: ID does not exist" containerID="f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.600546 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66"} err="failed to get container status \"f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66\": rpc error: code = NotFound desc = could not find container \"f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66\": container with ID starting with f3ecffa5f7df49b2823bdd5a3707d4b72825418e3cc16f97625733d64f0eaf66 not found: ID does not exist" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.611897 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-plugins-conf\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.611958 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-confd\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612004 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ss4g\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-kube-api-access-8ss4g\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612054 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612130 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bb9aa52-0895-418e-8e0b-d922948e85a7-erlang-cookie-secret\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612154 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-plugins\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612394 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bb9aa52-0895-418e-8e0b-d922948e85a7-pod-info\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: 
I0121 15:57:05.612438 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-erlang-cookie\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612472 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-server-conf\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612497 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612521 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-tls\") pod \"9bb9aa52-0895-418e-8e0b-d922948e85a7\" (UID: \"9bb9aa52-0895-418e-8e0b-d922948e85a7\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612867 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.612888 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/caae7093-b594-47fb-b863-38d825f0048d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.613016 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.614320 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.614613 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.615939 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.616051 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9bb9aa52-0895-418e-8e0b-d922948e85a7-pod-info" (OuterVolumeSpecName: "pod-info") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.619316 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb9aa52-0895-418e-8e0b-d922948e85a7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.628697 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.628832 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-kube-api-access-8ss4g" (OuterVolumeSpecName: "kube-api-access-8ss4g") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "kube-api-access-8ss4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.640606 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data" (OuterVolumeSpecName: "config-data") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.684423 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-server-conf" (OuterVolumeSpecName: "server-conf") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.710560 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9bb9aa52-0895-418e-8e0b-d922948e85a7" (UID: "9bb9aa52-0895-418e-8e0b-d922948e85a7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714634 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714666 4890 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bb9aa52-0895-418e-8e0b-d922948e85a7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714678 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714688 4890 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bb9aa52-0895-418e-8e0b-d922948e85a7-pod-info\") on node \"crc\" DevicePath 
\"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714699 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714709 4890 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714745 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714755 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714765 4890 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bb9aa52-0895-418e-8e0b-d922948e85a7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714774 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.714785 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ss4g\" (UniqueName: \"kubernetes.io/projected/9bb9aa52-0895-418e-8e0b-d922948e85a7-kube-api-access-8ss4g\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.729200 4890 operation_generator.go:917] UnmountDevice 
succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.773398 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.778083 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.804698 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.816110 4890 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.842770 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.202:3000/\": dial tcp 10.217.0.202:3000: connect: connection refused" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.917111 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-public-tls-certs\") pod \"db0e4f67-3406-4153-9fb3-3553f6fccad1\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.917190 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-config-data\") pod \"db0e4f67-3406-4153-9fb3-3553f6fccad1\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.917276 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-scripts\") pod \"db0e4f67-3406-4153-9fb3-3553f6fccad1\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.917374 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-combined-ca-bundle\") pod \"db0e4f67-3406-4153-9fb3-3553f6fccad1\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.917398 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68w8r\" (UniqueName: \"kubernetes.io/projected/db0e4f67-3406-4153-9fb3-3553f6fccad1-kube-api-access-68w8r\") pod \"db0e4f67-3406-4153-9fb3-3553f6fccad1\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.917506 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-internal-tls-certs\") pod \"db0e4f67-3406-4153-9fb3-3553f6fccad1\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.917550 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-fernet-keys\") pod \"db0e4f67-3406-4153-9fb3-3553f6fccad1\" (UID: \"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.917637 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-credential-keys\") pod \"db0e4f67-3406-4153-9fb3-3553f6fccad1\" (UID: 
\"db0e4f67-3406-4153-9fb3-3553f6fccad1\") " Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.922346 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "db0e4f67-3406-4153-9fb3-3553f6fccad1" (UID: "db0e4f67-3406-4153-9fb3-3553f6fccad1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.942073 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-scripts" (OuterVolumeSpecName: "scripts") pod "db0e4f67-3406-4153-9fb3-3553f6fccad1" (UID: "db0e4f67-3406-4153-9fb3-3553f6fccad1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.945632 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2780ff06-b30a-43e8-97d5-b9477d2713d6" path="/var/lib/kubelet/pods/2780ff06-b30a-43e8-97d5-b9477d2713d6/volumes" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.949571 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" path="/var/lib/kubelet/pods/332f4b6c-7fea-4dae-bb46-3c35ee84ba25/volumes" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.949777 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db0e4f67-3406-4153-9fb3-3553f6fccad1" (UID: "db0e4f67-3406-4153-9fb3-3553f6fccad1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.950829 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c99515-8e62-4e54-9ffc-e9294db2dc4f" path="/var/lib/kubelet/pods/50c99515-8e62-4e54-9ffc-e9294db2dc4f/volumes" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.952074 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770a4f11-b2a3-46fd-a06d-3af27edd3d9f" path="/var/lib/kubelet/pods/770a4f11-b2a3-46fd-a06d-3af27edd3d9f/volumes" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.953216 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe082f6-090f-4887-ab4a-cee13a8ad2a2" path="/var/lib/kubelet/pods/abe082f6-090f-4887-ab4a-cee13a8ad2a2/volumes" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.954262 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b402af9c-655e-4cd8-91a4-f9ff4f8ef671" path="/var/lib/kubelet/pods/b402af9c-655e-4cd8-91a4-f9ff4f8ef671/volumes" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.956466 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caae7093-b594-47fb-b863-38d825f0048d" path="/var/lib/kubelet/pods/caae7093-b594-47fb-b863-38d825f0048d/volumes" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.957844 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" path="/var/lib/kubelet/pods/cc7a8eb5-11e0-4603-b80a-3b4f6e724770/volumes" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.957943 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "db0e4f67-3406-4153-9fb3-3553f6fccad1" (UID: "db0e4f67-3406-4153-9fb3-3553f6fccad1"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.969547 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db0e4f67-3406-4153-9fb3-3553f6fccad1-kube-api-access-68w8r" (OuterVolumeSpecName: "kube-api-access-68w8r") pod "db0e4f67-3406-4153-9fb3-3553f6fccad1" (UID: "db0e4f67-3406-4153-9fb3-3553f6fccad1"). InnerVolumeSpecName "kube-api-access-68w8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.976193 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-config-data" (OuterVolumeSpecName: "config-data") pod "db0e4f67-3406-4153-9fb3-3553f6fccad1" (UID: "db0e4f67-3406-4153-9fb3-3553f6fccad1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:05 crc kubenswrapper[4890]: I0121 15:57:05.991455 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "db0e4f67-3406-4153-9fb3-3553f6fccad1" (UID: "db0e4f67-3406-4153-9fb3-3553f6fccad1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.019514 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "db0e4f67-3406-4153-9fb3-3553f6fccad1" (UID: "db0e4f67-3406-4153-9fb3-3553f6fccad1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.019986 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.020019 4890 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.020028 4890 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.020036 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.020044 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.020052 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.020063 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0e4f67-3406-4153-9fb3-3553f6fccad1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.020071 4890 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-68w8r\" (UniqueName: \"kubernetes.io/projected/db0e4f67-3406-4153-9fb3-3553f6fccad1-kube-api-access-68w8r\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.431705 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.462736 4890 generic.go:334] "Generic (PLEG): container finished" podID="477ba084-e185-42c6-a0ae-f5de448a4d13" containerID="5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6" exitCode=0 Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.462821 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"477ba084-e185-42c6-a0ae-f5de448a4d13","Type":"ContainerDied","Data":"5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6"} Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.462855 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"477ba084-e185-42c6-a0ae-f5de448a4d13","Type":"ContainerDied","Data":"250a8698801696d01d4a92a1011062b3b5e5387c8bc148dac28b63c9ccafca8d"} Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.462874 4890 scope.go:117] "RemoveContainer" containerID="5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.463018 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.467891 4890 generic.go:334] "Generic (PLEG): container finished" podID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerID="bdcf29add7cbc483a28d49a26883018699ca78c8f8bcfbac6388fbdd8fd5c94b" exitCode=0 Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.467944 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerDied","Data":"bdcf29add7cbc483a28d49a26883018699ca78c8f8bcfbac6388fbdd8fd5c94b"} Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.471054 4890 generic.go:334] "Generic (PLEG): container finished" podID="d3466f4b-2d63-490d-bae0-0921a4874daa" containerID="2ca05563eab7c7837a3f0611a032f1c0a8bc338b86d2e64c4be0a14c487366e0" exitCode=0 Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.471120 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5969dffb49-ng442" event={"ID":"d3466f4b-2d63-490d-bae0-0921a4874daa","Type":"ContainerDied","Data":"2ca05563eab7c7837a3f0611a032f1c0a8bc338b86d2e64c4be0a14c487366e0"} Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.476387 4890 generic.go:334] "Generic (PLEG): container finished" podID="0365c802-8af2-4230-a2e7-90959d273419" containerID="9d30851de1888098b6eefb06ebbe23168f3d78920011b9933b08aff11f05029f" exitCode=0 Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.476473 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" event={"ID":"0365c802-8af2-4230-a2e7-90959d273419","Type":"ContainerDied","Data":"9d30851de1888098b6eefb06ebbe23168f3d78920011b9933b08aff11f05029f"} Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.477709 4890 generic.go:334] "Generic (PLEG): container finished" podID="db0e4f67-3406-4153-9fb3-3553f6fccad1" 
containerID="ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f" exitCode=0 Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.477762 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d6cd7788b-hrbst" event={"ID":"db0e4f67-3406-4153-9fb3-3553f6fccad1","Type":"ContainerDied","Data":"ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f"} Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.477795 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d6cd7788b-hrbst" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.477805 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d6cd7788b-hrbst" event={"ID":"db0e4f67-3406-4153-9fb3-3553f6fccad1","Type":"ContainerDied","Data":"ba23d8edb5f63f3200279a3e78acf00fc988ae93e6b1b092da859323084799f0"} Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.477810 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.510121 4890 scope.go:117] "RemoveContainer" containerID="5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6" Jan 21 15:57:06 crc kubenswrapper[4890]: E0121 15:57:06.513360 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6\": container with ID starting with 5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6 not found: ID does not exist" containerID="5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.513410 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6"} err="failed to get container status \"5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6\": rpc error: code = NotFound desc = could not find container \"5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6\": container with ID starting with 5509224e7b8f251f2bd011bb38c58c46dfe6c022ddf2a1120fea9d63aab3c2b6 not found: ID does not exist" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.513439 4890 scope.go:117] "RemoveContainer" containerID="ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.514762 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.525570 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.529472 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-config-data\") pod \"477ba084-e185-42c6-a0ae-f5de448a4d13\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.529514 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq528\" (UniqueName: \"kubernetes.io/projected/477ba084-e185-42c6-a0ae-f5de448a4d13-kube-api-access-nq528\") pod \"477ba084-e185-42c6-a0ae-f5de448a4d13\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.529614 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-combined-ca-bundle\") pod \"477ba084-e185-42c6-a0ae-f5de448a4d13\" (UID: \"477ba084-e185-42c6-a0ae-f5de448a4d13\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.535373 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477ba084-e185-42c6-a0ae-f5de448a4d13-kube-api-access-nq528" (OuterVolumeSpecName: "kube-api-access-nq528") pod "477ba084-e185-42c6-a0ae-f5de448a4d13" (UID: "477ba084-e185-42c6-a0ae-f5de448a4d13"). InnerVolumeSpecName "kube-api-access-nq528". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.540093 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5d6cd7788b-hrbst"] Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.540174 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5d6cd7788b-hrbst"] Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.556267 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "477ba084-e185-42c6-a0ae-f5de448a4d13" (UID: "477ba084-e185-42c6-a0ae-f5de448a4d13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.567455 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-config-data" (OuterVolumeSpecName: "config-data") pod "477ba084-e185-42c6-a0ae-f5de448a4d13" (UID: "477ba084-e185-42c6-a0ae-f5de448a4d13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.573768 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.585167 4890 scope.go:117] "RemoveContainer" containerID="ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f" Jan 21 15:57:06 crc kubenswrapper[4890]: E0121 15:57:06.585813 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f\": container with ID starting with ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f not found: ID does not exist" containerID="ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.585848 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f"} err="failed to get container status \"ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f\": rpc error: code = NotFound desc = could not find container \"ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f\": container with ID starting with ded49d6352122985279dbd202990dfc6d4e01b5bb75ed1d35c66ef6ffce32c4f not found: ID does not exist" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.631508 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.631544 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq528\" (UniqueName: \"kubernetes.io/projected/477ba084-e185-42c6-a0ae-f5de448a4d13-kube-api-access-nq528\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.631555 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/477ba084-e185-42c6-a0ae-f5de448a4d13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.631860 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.664624 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732510 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data-custom\") pod \"0365c802-8af2-4230-a2e7-90959d273419\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732638 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-config-data\") pod \"ff7b18fd-53f0-48dc-84ae-d706234668f7\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732712 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-sg-core-conf-yaml\") pod \"ff7b18fd-53f0-48dc-84ae-d706234668f7\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732736 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-log-httpd\") pod \"ff7b18fd-53f0-48dc-84ae-d706234668f7\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732761 4890 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data\") pod \"0365c802-8af2-4230-a2e7-90959d273419\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732840 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0365c802-8af2-4230-a2e7-90959d273419-logs\") pod \"0365c802-8af2-4230-a2e7-90959d273419\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732865 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-run-httpd\") pod \"ff7b18fd-53f0-48dc-84ae-d706234668f7\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732895 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-scripts\") pod \"ff7b18fd-53f0-48dc-84ae-d706234668f7\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732928 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-combined-ca-bundle\") pod \"ff7b18fd-53f0-48dc-84ae-d706234668f7\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732958 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-combined-ca-bundle\") pod \"0365c802-8af2-4230-a2e7-90959d273419\" (UID: 
\"0365c802-8af2-4230-a2e7-90959d273419\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.732987 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-ceilometer-tls-certs\") pod \"ff7b18fd-53f0-48dc-84ae-d706234668f7\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.733036 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn4lw\" (UniqueName: \"kubernetes.io/projected/ff7b18fd-53f0-48dc-84ae-d706234668f7-kube-api-access-wn4lw\") pod \"ff7b18fd-53f0-48dc-84ae-d706234668f7\" (UID: \"ff7b18fd-53f0-48dc-84ae-d706234668f7\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.733069 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x94t\" (UniqueName: \"kubernetes.io/projected/0365c802-8af2-4230-a2e7-90959d273419-kube-api-access-9x94t\") pod \"0365c802-8af2-4230-a2e7-90959d273419\" (UID: \"0365c802-8af2-4230-a2e7-90959d273419\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.734616 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0365c802-8af2-4230-a2e7-90959d273419-logs" (OuterVolumeSpecName: "logs") pod "0365c802-8af2-4230-a2e7-90959d273419" (UID: "0365c802-8af2-4230-a2e7-90959d273419"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.737841 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff7b18fd-53f0-48dc-84ae-d706234668f7" (UID: "ff7b18fd-53f0-48dc-84ae-d706234668f7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.738244 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0365c802-8af2-4230-a2e7-90959d273419-kube-api-access-9x94t" (OuterVolumeSpecName: "kube-api-access-9x94t") pod "0365c802-8af2-4230-a2e7-90959d273419" (UID: "0365c802-8af2-4230-a2e7-90959d273419"). InnerVolumeSpecName "kube-api-access-9x94t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.738249 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0365c802-8af2-4230-a2e7-90959d273419" (UID: "0365c802-8af2-4230-a2e7-90959d273419"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.738901 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff7b18fd-53f0-48dc-84ae-d706234668f7" (UID: "ff7b18fd-53f0-48dc-84ae-d706234668f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.742552 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7b18fd-53f0-48dc-84ae-d706234668f7-kube-api-access-wn4lw" (OuterVolumeSpecName: "kube-api-access-wn4lw") pod "ff7b18fd-53f0-48dc-84ae-d706234668f7" (UID: "ff7b18fd-53f0-48dc-84ae-d706234668f7"). InnerVolumeSpecName "kube-api-access-wn4lw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.744423 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-scripts" (OuterVolumeSpecName: "scripts") pod "ff7b18fd-53f0-48dc-84ae-d706234668f7" (UID: "ff7b18fd-53f0-48dc-84ae-d706234668f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.759297 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff7b18fd-53f0-48dc-84ae-d706234668f7" (UID: "ff7b18fd-53f0-48dc-84ae-d706234668f7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.760121 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0365c802-8af2-4230-a2e7-90959d273419" (UID: "0365c802-8af2-4230-a2e7-90959d273419"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.780451 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data" (OuterVolumeSpecName: "config-data") pod "0365c802-8af2-4230-a2e7-90959d273419" (UID: "0365c802-8af2-4230-a2e7-90959d273419"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.780978 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ff7b18fd-53f0-48dc-84ae-d706234668f7" (UID: "ff7b18fd-53f0-48dc-84ae-d706234668f7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.799847 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.804659 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff7b18fd-53f0-48dc-84ae-d706234668f7" (UID: "ff7b18fd-53f0-48dc-84ae-d706234668f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.805669 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.833794 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-config-data" (OuterVolumeSpecName: "config-data") pod "ff7b18fd-53f0-48dc-84ae-d706234668f7" (UID: "ff7b18fd-53f0-48dc-84ae-d706234668f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834308 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm2s6\" (UniqueName: \"kubernetes.io/projected/d3466f4b-2d63-490d-bae0-0921a4874daa-kube-api-access-hm2s6\") pod \"d3466f4b-2d63-490d-bae0-0921a4874daa\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834440 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data-custom\") pod \"d3466f4b-2d63-490d-bae0-0921a4874daa\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834488 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-combined-ca-bundle\") pod \"d3466f4b-2d63-490d-bae0-0921a4874daa\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834550 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3466f4b-2d63-490d-bae0-0921a4874daa-logs\") pod \"d3466f4b-2d63-490d-bae0-0921a4874daa\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834592 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data\") pod \"d3466f4b-2d63-490d-bae0-0921a4874daa\" (UID: \"d3466f4b-2d63-490d-bae0-0921a4874daa\") " Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834873 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0365c802-8af2-4230-a2e7-90959d273419-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834897 4890 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834909 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834920 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834936 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834949 4890 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834958 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn4lw\" (UniqueName: \"kubernetes.io/projected/ff7b18fd-53f0-48dc-84ae-d706234668f7-kube-api-access-wn4lw\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834967 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x94t\" (UniqueName: \"kubernetes.io/projected/0365c802-8af2-4230-a2e7-90959d273419-kube-api-access-9x94t\") on node \"crc\" DevicePath \"\"" 
Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834975 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834984 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.834995 4890 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff7b18fd-53f0-48dc-84ae-d706234668f7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.835007 4890 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff7b18fd-53f0-48dc-84ae-d706234668f7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.835018 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0365c802-8af2-4230-a2e7-90959d273419-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.835884 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3466f4b-2d63-490d-bae0-0921a4874daa-logs" (OuterVolumeSpecName: "logs") pod "d3466f4b-2d63-490d-bae0-0921a4874daa" (UID: "d3466f4b-2d63-490d-bae0-0921a4874daa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.837224 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3466f4b-2d63-490d-bae0-0921a4874daa-kube-api-access-hm2s6" (OuterVolumeSpecName: "kube-api-access-hm2s6") pod "d3466f4b-2d63-490d-bae0-0921a4874daa" (UID: "d3466f4b-2d63-490d-bae0-0921a4874daa"). InnerVolumeSpecName "kube-api-access-hm2s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.837314 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d3466f4b-2d63-490d-bae0-0921a4874daa" (UID: "d3466f4b-2d63-490d-bae0-0921a4874daa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.851535 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3466f4b-2d63-490d-bae0-0921a4874daa" (UID: "d3466f4b-2d63-490d-bae0-0921a4874daa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.869793 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data" (OuterVolumeSpecName: "config-data") pod "d3466f4b-2d63-490d-bae0-0921a4874daa" (UID: "d3466f4b-2d63-490d-bae0-0921a4874daa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.936146 4890 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.936186 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.936196 4890 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3466f4b-2d63-490d-bae0-0921a4874daa-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.936207 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3466f4b-2d63-490d-bae0-0921a4874daa-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:06 crc kubenswrapper[4890]: I0121 15:57:06.936216 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm2s6\" (UniqueName: \"kubernetes.io/projected/d3466f4b-2d63-490d-bae0-0921a4874daa-kube-api-access-hm2s6\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.489644 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.489663 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846846cd4b-wmjvw" event={"ID":"0365c802-8af2-4230-a2e7-90959d273419","Type":"ContainerDied","Data":"06150ec105d0e875a593ee4b4872021c717cf9b30faea083da88f68204d4e18f"} Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.490209 4890 scope.go:117] "RemoveContainer" containerID="9d30851de1888098b6eefb06ebbe23168f3d78920011b9933b08aff11f05029f" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.493451 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="371fefce-bb16-4c48-ac5a-01885e77c090" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.166:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.500698 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.501606 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff7b18fd-53f0-48dc-84ae-d706234668f7","Type":"ContainerDied","Data":"22dd1fc6533ed401198587d5331c0f4e5f601325cc551eacae08824d4cf245b4"} Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.506875 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5969dffb49-ng442" event={"ID":"d3466f4b-2d63-490d-bae0-0921a4874daa","Type":"ContainerDied","Data":"b38b5499e0e0b88d904a91a2076a29815438fe843f3eec5e66a7607c22417f8b"} Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.507118 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5969dffb49-ng442" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.521216 4890 scope.go:117] "RemoveContainer" containerID="c2f49312d4e89e99e690840cb5a943c78b8a717a32ae94d1b8fa6f3f50c660c1" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.525769 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-846846cd4b-wmjvw"] Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.534677 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-846846cd4b-wmjvw"] Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.541188 4890 scope.go:117] "RemoveContainer" containerID="b1850cb5e39351073cf39f1d0e88018e7526c6b8091783f112754a2815cb88bf" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.541620 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.552203 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.559154 4890 scope.go:117] "RemoveContainer" containerID="e6874597d1e13caa14de2a102072cb91ab0359d88ae4e3beb3a5adaa31d395bd" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.570662 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5969dffb49-ng442"] Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.577705 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5969dffb49-ng442"] Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.579267 4890 scope.go:117] "RemoveContainer" containerID="bdcf29add7cbc483a28d49a26883018699ca78c8f8bcfbac6388fbdd8fd5c94b" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.599231 4890 scope.go:117] "RemoveContainer" containerID="2660277560aad838dbebdfb2cd900cfc69db1d476e814c29ad6367cf3448c4ee" Jan 21 15:57:07 crc kubenswrapper[4890]: 
I0121 15:57:07.623175 4890 scope.go:117] "RemoveContainer" containerID="2ca05563eab7c7837a3f0611a032f1c0a8bc338b86d2e64c4be0a14c487366e0" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.693545 4890 scope.go:117] "RemoveContainer" containerID="fec4b0c0a2231fb8d38d939d55a6826e9794606484374bcac4d37face3381fe7" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.921430 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0365c802-8af2-4230-a2e7-90959d273419" path="/var/lib/kubelet/pods/0365c802-8af2-4230-a2e7-90959d273419/volumes" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.922139 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="477ba084-e185-42c6-a0ae-f5de448a4d13" path="/var/lib/kubelet/pods/477ba084-e185-42c6-a0ae-f5de448a4d13/volumes" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.922921 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" path="/var/lib/kubelet/pods/9bb9aa52-0895-418e-8e0b-d922948e85a7/volumes" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.924105 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3466f4b-2d63-490d-bae0-0921a4874daa" path="/var/lib/kubelet/pods/d3466f4b-2d63-490d-bae0-0921a4874daa/volumes" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.924772 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db0e4f67-3406-4153-9fb3-3553f6fccad1" path="/var/lib/kubelet/pods/db0e4f67-3406-4153-9fb3-3553f6fccad1/volumes" Jan 21 15:57:07 crc kubenswrapper[4890]: I0121 15:57:07.925384 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" path="/var/lib/kubelet/pods/ff7b18fd-53f0-48dc-84ae-d706234668f7/volumes" Jan 21 15:57:10 crc kubenswrapper[4890]: E0121 15:57:10.275161 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:10 crc kubenswrapper[4890]: E0121 15:57:10.276130 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:10 crc kubenswrapper[4890]: E0121 15:57:10.276146 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:10 crc kubenswrapper[4890]: E0121 15:57:10.276650 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:10 crc kubenswrapper[4890]: E0121 15:57:10.276687 4890 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" probeType="Readiness" 
pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" Jan 21 15:57:10 crc kubenswrapper[4890]: E0121 15:57:10.279419 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:10 crc kubenswrapper[4890]: E0121 15:57:10.281057 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:10 crc kubenswrapper[4890]: E0121 15:57:10.281105 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" Jan 21 15:57:15 crc kubenswrapper[4890]: E0121 15:57:15.278569 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:15 crc kubenswrapper[4890]: E0121 15:57:15.278577 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running 
failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:15 crc kubenswrapper[4890]: E0121 15:57:15.282127 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:15 crc kubenswrapper[4890]: E0121 15:57:15.282278 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:15 crc kubenswrapper[4890]: E0121 15:57:15.283649 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:15 crc kubenswrapper[4890]: E0121 15:57:15.283688 4890 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" Jan 21 15:57:15 
crc kubenswrapper[4890]: E0121 15:57:15.295628 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:15 crc kubenswrapper[4890]: E0121 15:57:15.295718 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.546660 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.596756 4890 generic.go:334] "Generic (PLEG): container finished" podID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerID="7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322" exitCode=0 Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.596800 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5585884bc-vnz4h" event={"ID":"902e1b21-9fb7-4302-b0f7-a832c7a42ca1","Type":"ContainerDied","Data":"7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322"} Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.596806 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5585884bc-vnz4h" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.596842 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5585884bc-vnz4h" event={"ID":"902e1b21-9fb7-4302-b0f7-a832c7a42ca1","Type":"ContainerDied","Data":"b9fb54e5f3f01a90ef7fef579e16b77f15fdb5d1bc2a563906f67031e84995b7"} Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.596858 4890 scope.go:117] "RemoveContainer" containerID="af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.614744 4890 scope.go:117] "RemoveContainer" containerID="7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.631779 4890 scope.go:117] "RemoveContainer" containerID="af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503" Jan 21 15:57:16 crc kubenswrapper[4890]: E0121 15:57:16.632252 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503\": container with ID starting with af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503 not found: ID does not exist" containerID="af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.632280 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503"} err="failed to get container status \"af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503\": rpc error: code = NotFound desc = could not find container \"af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503\": container with ID starting with af23ab036c3237007e6021ce79fe478a85cbaed5fa1ea44694cb29f8004f2503 not found: ID does not exist" Jan 21 15:57:16 crc 
kubenswrapper[4890]: I0121 15:57:16.632301 4890 scope.go:117] "RemoveContainer" containerID="7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322" Jan 21 15:57:16 crc kubenswrapper[4890]: E0121 15:57:16.632635 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322\": container with ID starting with 7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322 not found: ID does not exist" containerID="7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.632678 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322"} err="failed to get container status \"7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322\": rpc error: code = NotFound desc = could not find container \"7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322\": container with ID starting with 7807589e59170aafd28271bf151bc8be0c183675eff5c789cf6ee856a210f322 not found: ID does not exist" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.682132 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-httpd-config\") pod \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.682197 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-config\") pod \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.682220 4890 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-public-tls-certs\") pod \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.682240 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-combined-ca-bundle\") pod \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.682325 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-ovndb-tls-certs\") pod \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.682392 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-internal-tls-certs\") pod \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.682412 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w64p\" (UniqueName: \"kubernetes.io/projected/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-kube-api-access-2w64p\") pod \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\" (UID: \"902e1b21-9fb7-4302-b0f7-a832c7a42ca1\") " Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.706498 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-kube-api-access-2w64p" (OuterVolumeSpecName: "kube-api-access-2w64p") pod 
"902e1b21-9fb7-4302-b0f7-a832c7a42ca1" (UID: "902e1b21-9fb7-4302-b0f7-a832c7a42ca1"). InnerVolumeSpecName "kube-api-access-2w64p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.706508 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "902e1b21-9fb7-4302-b0f7-a832c7a42ca1" (UID: "902e1b21-9fb7-4302-b0f7-a832c7a42ca1"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.726569 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "902e1b21-9fb7-4302-b0f7-a832c7a42ca1" (UID: "902e1b21-9fb7-4302-b0f7-a832c7a42ca1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.730567 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "902e1b21-9fb7-4302-b0f7-a832c7a42ca1" (UID: "902e1b21-9fb7-4302-b0f7-a832c7a42ca1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.738735 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-config" (OuterVolumeSpecName: "config") pod "902e1b21-9fb7-4302-b0f7-a832c7a42ca1" (UID: "902e1b21-9fb7-4302-b0f7-a832c7a42ca1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.744022 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "902e1b21-9fb7-4302-b0f7-a832c7a42ca1" (UID: "902e1b21-9fb7-4302-b0f7-a832c7a42ca1"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.757897 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "902e1b21-9fb7-4302-b0f7-a832c7a42ca1" (UID: "902e1b21-9fb7-4302-b0f7-a832c7a42ca1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.786719 4890 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.786768 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w64p\" (UniqueName: \"kubernetes.io/projected/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-kube-api-access-2w64p\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.786783 4890 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.786793 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-config\") on node \"crc\" DevicePath 
\"\"" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.786804 4890 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.786813 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.786820 4890 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/902e1b21-9fb7-4302-b0f7-a832c7a42ca1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.933685 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5585884bc-vnz4h"] Jan 21 15:57:16 crc kubenswrapper[4890]: I0121 15:57:16.939450 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5585884bc-vnz4h"] Jan 21 15:57:17 crc kubenswrapper[4890]: I0121 15:57:17.923083 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" path="/var/lib/kubelet/pods/902e1b21-9fb7-4302-b0f7-a832c7a42ca1/volumes" Jan 21 15:57:20 crc kubenswrapper[4890]: E0121 15:57:20.275588 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:20 crc kubenswrapper[4890]: E0121 15:57:20.275915 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:20 crc kubenswrapper[4890]: E0121 15:57:20.276283 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:20 crc kubenswrapper[4890]: E0121 15:57:20.276321 4890 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" Jan 21 15:57:20 crc kubenswrapper[4890]: E0121 15:57:20.276451 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:20 crc kubenswrapper[4890]: E0121 15:57:20.277839 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:20 crc kubenswrapper[4890]: E0121 15:57:20.279291 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:20 crc kubenswrapper[4890]: E0121 15:57:20.279372 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" Jan 21 15:57:25 crc kubenswrapper[4890]: E0121 15:57:25.275486 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:25 crc kubenswrapper[4890]: E0121 15:57:25.276334 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:25 crc kubenswrapper[4890]: E0121 15:57:25.276584 4890 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:25 crc kubenswrapper[4890]: E0121 15:57:25.276681 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 15:57:25 crc kubenswrapper[4890]: E0121 15:57:25.276707 4890 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" Jan 21 15:57:25 crc kubenswrapper[4890]: E0121 15:57:25.278571 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:25 crc kubenswrapper[4890]: E0121 15:57:25.279992 4890 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 15:57:25 crc kubenswrapper[4890]: E0121 15:57:25.280026 4890 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-dfk6x" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" Jan 21 15:57:27 crc kubenswrapper[4890]: I0121 15:57:27.698768 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dfk6x_233162f3-fe28-4476-bc40-eb4b138ae68a/ovs-vswitchd/0.log" Jan 21 15:57:27 crc kubenswrapper[4890]: I0121 15:57:27.700293 4890 generic.go:334] "Generic (PLEG): container finished" podID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" exitCode=137 Jan 21 15:57:27 crc kubenswrapper[4890]: I0121 15:57:27.700334 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dfk6x" event={"ID":"233162f3-fe28-4476-bc40-eb4b138ae68a","Type":"ContainerDied","Data":"3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c"} Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.626957 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dfk6x_233162f3-fe28-4476-bc40-eb4b138ae68a/ovs-vswitchd/0.log" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.628160 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.721966 4890 generic.go:334] "Generic (PLEG): container finished" podID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerID="d353b883ad9d704cf38a51820b942338cdd8c742501c227a8140207f662015e8" exitCode=137 Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.723228 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"d353b883ad9d704cf38a51820b942338cdd8c742501c227a8140207f662015e8"} Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.737862 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dfk6x_233162f3-fe28-4476-bc40-eb4b138ae68a/ovs-vswitchd/0.log" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.739222 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dfk6x" event={"ID":"233162f3-fe28-4476-bc40-eb4b138ae68a","Type":"ContainerDied","Data":"52b1f6dfa2942f85834acf1faaac5170191479c15990d3f6453b2be4099fb535"} Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.739278 4890 scope.go:117] "RemoveContainer" containerID="3763ddf89d1d603852086f65e8a0747a04a1931332a37db7d32a0f7740b6233c" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.739461 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-dfk6x" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.760075 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-lib\") pod \"233162f3-fe28-4476-bc40-eb4b138ae68a\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.760299 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-lib" (OuterVolumeSpecName: "var-lib") pod "233162f3-fe28-4476-bc40-eb4b138ae68a" (UID: "233162f3-fe28-4476-bc40-eb4b138ae68a"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.760387 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-etc-ovs\") pod \"233162f3-fe28-4476-bc40-eb4b138ae68a\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.760421 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-run\") pod \"233162f3-fe28-4476-bc40-eb4b138ae68a\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.760469 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/233162f3-fe28-4476-bc40-eb4b138ae68a-scripts\") pod \"233162f3-fe28-4476-bc40-eb4b138ae68a\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.760497 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-log\") pod \"233162f3-fe28-4476-bc40-eb4b138ae68a\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.760530 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j58pt\" (UniqueName: \"kubernetes.io/projected/233162f3-fe28-4476-bc40-eb4b138ae68a-kube-api-access-j58pt\") pod \"233162f3-fe28-4476-bc40-eb4b138ae68a\" (UID: \"233162f3-fe28-4476-bc40-eb4b138ae68a\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.760741 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-run" (OuterVolumeSpecName: "var-run") pod "233162f3-fe28-4476-bc40-eb4b138ae68a" (UID: "233162f3-fe28-4476-bc40-eb4b138ae68a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.760790 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "233162f3-fe28-4476-bc40-eb4b138ae68a" (UID: "233162f3-fe28-4476-bc40-eb4b138ae68a"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.761227 4890 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-lib\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.761243 4890 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.761254 4890 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.761282 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-log" (OuterVolumeSpecName: "var-log") pod "233162f3-fe28-4476-bc40-eb4b138ae68a" (UID: "233162f3-fe28-4476-bc40-eb4b138ae68a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.762528 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/233162f3-fe28-4476-bc40-eb4b138ae68a-scripts" (OuterVolumeSpecName: "scripts") pod "233162f3-fe28-4476-bc40-eb4b138ae68a" (UID: "233162f3-fe28-4476-bc40-eb4b138ae68a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.767253 4890 scope.go:117] "RemoveContainer" containerID="283f0b20e296293f3bfde09844b3fc251fe595f6f266c9f054604da3336ca97f" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.769537 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/233162f3-fe28-4476-bc40-eb4b138ae68a-kube-api-access-j58pt" (OuterVolumeSpecName: "kube-api-access-j58pt") pod "233162f3-fe28-4476-bc40-eb4b138ae68a" (UID: "233162f3-fe28-4476-bc40-eb4b138ae68a"). InnerVolumeSpecName "kube-api-access-j58pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.839341 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.843285 4890 scope.go:117] "RemoveContainer" containerID="3575042dd8f2422aba0e5359772f4de6498b60c970bb53645ccc0512d6212730" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.862486 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/233162f3-fe28-4476-bc40-eb4b138ae68a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.862521 4890 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/233162f3-fe28-4476-bc40-eb4b138ae68a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.862531 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j58pt\" (UniqueName: \"kubernetes.io/projected/233162f3-fe28-4476-bc40-eb4b138ae68a-kube-api-access-j58pt\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.963879 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" 
(UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-lock\") pod \"e7d46fba-02db-42e1-a916-1b2528bbdd52\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.964209 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs9xx\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-kube-api-access-qs9xx\") pod \"e7d46fba-02db-42e1-a916-1b2528bbdd52\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.964237 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"e7d46fba-02db-42e1-a916-1b2528bbdd52\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.964297 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") pod \"e7d46fba-02db-42e1-a916-1b2528bbdd52\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.964342 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-cache\") pod \"e7d46fba-02db-42e1-a916-1b2528bbdd52\" (UID: \"e7d46fba-02db-42e1-a916-1b2528bbdd52\") " Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.965409 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-cache" (OuterVolumeSpecName: "cache") pod "e7d46fba-02db-42e1-a916-1b2528bbdd52" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.965933 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-lock" (OuterVolumeSpecName: "lock") pod "e7d46fba-02db-42e1-a916-1b2528bbdd52" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.967485 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-kube-api-access-qs9xx" (OuterVolumeSpecName: "kube-api-access-qs9xx") pod "e7d46fba-02db-42e1-a916-1b2528bbdd52" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52"). InnerVolumeSpecName "kube-api-access-qs9xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.967915 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "e7d46fba-02db-42e1-a916-1b2528bbdd52" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:57:28 crc kubenswrapper[4890]: I0121 15:57:28.968266 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e7d46fba-02db-42e1-a916-1b2528bbdd52" (UID: "e7d46fba-02db-42e1-a916-1b2528bbdd52"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.065665 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs9xx\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-kube-api-access-qs9xx\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.065729 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.065744 4890 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e7d46fba-02db-42e1-a916-1b2528bbdd52-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.065754 4890 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-cache\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.065767 4890 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e7d46fba-02db-42e1-a916-1b2528bbdd52-lock\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.070496 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-dfk6x"] Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.076275 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-dfk6x"] Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.081791 4890 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.167167 4890 reconciler_common.go:293] "Volume 
detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.754269 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e7d46fba-02db-42e1-a916-1b2528bbdd52","Type":"ContainerDied","Data":"a32af2399b775ce1308942e9ee1c941e38291012df640443686b78732e95ac5f"} Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.754320 4890 scope.go:117] "RemoveContainer" containerID="d353b883ad9d704cf38a51820b942338cdd8c742501c227a8140207f662015e8" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.754470 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.777235 4890 scope.go:117] "RemoveContainer" containerID="02a34f2bdfeb043480bedf1700ad25535feb47fbbf2cc661cbb62aad70e40a3b" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.794893 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.799797 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.803635 4890 scope.go:117] "RemoveContainer" containerID="22335d0f4d49f32620ca48289dd4eb408b7f064e87d7877cc89abf517378da85" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.820055 4890 scope.go:117] "RemoveContainer" containerID="1df25c4313e8f39ad26d3ec8a848f850a004e7acdea809912d27022424ac0fec" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.839673 4890 scope.go:117] "RemoveContainer" containerID="291b43ebb5749379f57dbecf17da84aa48983e3db96591d9b7e0aa8d76cc1621" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.858953 4890 scope.go:117] "RemoveContainer" containerID="7c5460ff3a431a21df2a718e89dbf2a5a523b0ee5fdfadf49395a1b74d24c6ab" Jan 
21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.880157 4890 scope.go:117] "RemoveContainer" containerID="15ae8d44e4e537260de3b6431b223bf85ce1e10d4762ac9a192b7a7606fb94e3" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.898642 4890 scope.go:117] "RemoveContainer" containerID="8616884f18e315e3258c25763c5c8cdaea184dc25ba69e7d8e0fa91ac49eaa89" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.920549 4890 scope.go:117] "RemoveContainer" containerID="520ea43d4d0b04096ca36e892322861f691a6670e78931f59f2ea9d885179af5" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.926756 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" path="/var/lib/kubelet/pods/233162f3-fe28-4476-bc40-eb4b138ae68a/volumes" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.927570 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" path="/var/lib/kubelet/pods/e7d46fba-02db-42e1-a916-1b2528bbdd52/volumes" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.938886 4890 scope.go:117] "RemoveContainer" containerID="ae1658689b220e377c8fba9958351f538aaba5502635f74cadc260a696a44a6f" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.959731 4890 scope.go:117] "RemoveContainer" containerID="5fa5e2d9ca2571b7361e659ef85544eb30c548cf9527ac1a3be6a7a829e8fbee" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.975647 4890 scope.go:117] "RemoveContainer" containerID="56a854520d26c749a116af4b530898a508240c3791da8d8b127790fb93dfdcc0" Jan 21 15:57:29 crc kubenswrapper[4890]: I0121 15:57:29.992200 4890 scope.go:117] "RemoveContainer" containerID="b12bd693bb7580997fa08c163b6c91d65afd3c016d9dbb69b3a75a78a8a917e1" Jan 21 15:57:30 crc kubenswrapper[4890]: I0121 15:57:30.011550 4890 scope.go:117] "RemoveContainer" containerID="ec758b8a6824700021b92bcf01c6881e87a7af7bbc0acf6895ec0b0549188a0c" Jan 21 15:57:30 crc kubenswrapper[4890]: I0121 15:57:30.028121 4890 
scope.go:117] "RemoveContainer" containerID="044efc2d7955bb08fe4ff237c3a7e4e25d9ab4e72fa5d3faa7c58ac27561b350" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.766731 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qfvwl"] Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767056 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-server" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767075 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-server" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767094 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="sg-core" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767102 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="sg-core" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767113 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-server" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767120 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-server" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767133 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3466f4b-2d63-490d-bae0-0921a4874daa" containerName="barbican-worker" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767140 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3466f4b-2d63-490d-bae0-0921a4874daa" containerName="barbican-worker" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767147 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="371fefce-bb16-4c48-ac5a-01885e77c090" containerName="cinder-api-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767153 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="371fefce-bb16-4c48-ac5a-01885e77c090" containerName="cinder-api-log" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767164 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-reaper" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767170 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-reaper" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767182 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="swift-recon-cron" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767188 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="swift-recon-cron" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767198 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-auditor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767203 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-auditor" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767212 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736df6ca-1308-4f87-a39e-7aca6ad4d5a1" containerName="kube-state-metrics" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767219 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="736df6ca-1308-4f87-a39e-7aca6ad4d5a1" containerName="kube-state-metrics" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767227 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerName="glance-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767233 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerName="glance-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767243 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767248 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767258 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-updater" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767264 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-updater" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767272 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-expirer" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767278 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-expirer" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767287 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767294 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-log" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767302 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767307 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api-log" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767316 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-replicator" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767323 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-replicator" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767334 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767342 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767364 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="rsync" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767370 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="rsync" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767381 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" containerName="mysql-bootstrap" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767387 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" containerName="mysql-bootstrap" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767398 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerName="neutron-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767404 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerName="neutron-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767414 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server-init" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767421 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server-init" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767431 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-server" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767441 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-server" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767451 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477ba084-e185-42c6-a0ae-f5de448a4d13" containerName="nova-cell1-conductor-conductor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767458 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="477ba084-e185-42c6-a0ae-f5de448a4d13" containerName="nova-cell1-conductor-conductor" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767470 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2780ff06-b30a-43e8-97d5-b9477d2713d6" containerName="nova-scheduler-scheduler" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767476 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2780ff06-b30a-43e8-97d5-b9477d2713d6" containerName="nova-scheduler-scheduler" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767482 4890 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" containerName="setup-container" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767488 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" containerName="setup-container" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767497 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84118502-58f0-48b2-b659-7f748311fa22" containerName="nova-api-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767504 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="84118502-58f0-48b2-b659-7f748311fa22" containerName="nova-api-api" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767513 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" containerName="galera" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767520 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" containerName="galera" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767528 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767535 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767542 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-auditor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767549 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-auditor" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767557 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="371fefce-bb16-4c48-ac5a-01885e77c090" containerName="cinder-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767568 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="371fefce-bb16-4c48-ac5a-01885e77c090" containerName="cinder-api" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767583 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" containerName="rabbitmq" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767590 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" containerName="rabbitmq" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767602 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-updater" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767609 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-updater" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767618 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3466f4b-2d63-490d-bae0-0921a4874daa" containerName="barbican-worker-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767625 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3466f4b-2d63-490d-bae0-0921a4874daa" containerName="barbican-worker-log" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767634 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-auditor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767641 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-auditor" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767652 4890 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="db0e4f67-3406-4153-9fb3-3553f6fccad1" containerName="keystone-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767660 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0e4f67-3406-4153-9fb3-3553f6fccad1" containerName="keystone-api" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767673 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerName="glance-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767681 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerName="glance-log" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767691 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-replicator" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767698 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-replicator" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767710 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0365c802-8af2-4230-a2e7-90959d273419" containerName="barbican-keystone-listener" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767716 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0365c802-8af2-4230-a2e7-90959d273419" containerName="barbican-keystone-listener" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767727 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c99515-8e62-4e54-9ffc-e9294db2dc4f" containerName="nova-cell0-conductor-conductor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767734 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c99515-8e62-4e54-9ffc-e9294db2dc4f" containerName="nova-cell0-conductor-conductor" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767745 4890 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-replicator" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767752 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-replicator" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767764 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697e1d3a-fab0-471b-bea8-43212f489fec" containerName="glance-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767772 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="697e1d3a-fab0-471b-bea8-43212f489fec" containerName="glance-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767781 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="ceilometer-notification-agent" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767788 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="ceilometer-notification-agent" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767800 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0365c802-8af2-4230-a2e7-90959d273419" containerName="barbican-keystone-listener-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767808 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0365c802-8af2-4230-a2e7-90959d273419" containerName="barbican-keystone-listener-log" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767820 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerName="neutron-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767830 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerName="neutron-api" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 
15:57:32.767840 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="proxy-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767849 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="proxy-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767865 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770a4f11-b2a3-46fd-a06d-3af27edd3d9f" containerName="memcached" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767874 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="770a4f11-b2a3-46fd-a06d-3af27edd3d9f" containerName="memcached" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767888 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerName="ovn-northd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767895 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerName="ovn-northd" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767908 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="ceilometer-central-agent" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767918 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="ceilometer-central-agent" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767928 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caae7093-b594-47fb-b863-38d825f0048d" containerName="setup-container" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767936 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="caae7093-b594-47fb-b863-38d825f0048d" containerName="setup-container" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767946 4890 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerName="openstack-network-exporter" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767954 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerName="openstack-network-exporter" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767966 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697e1d3a-fab0-471b-bea8-43212f489fec" containerName="glance-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767974 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="697e1d3a-fab0-471b-bea8-43212f489fec" containerName="glance-log" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.767988 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-metadata" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.767997 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-metadata" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.768010 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caae7093-b594-47fb-b863-38d825f0048d" containerName="rabbitmq" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768018 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="caae7093-b594-47fb-b863-38d825f0048d" containerName="rabbitmq" Jan 21 15:57:32 crc kubenswrapper[4890]: E0121 15:57:32.768028 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84118502-58f0-48b2-b659-7f748311fa22" containerName="nova-api-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768036 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="84118502-58f0-48b2-b659-7f748311fa22" containerName="nova-api-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768199 4890 
memory_manager.go:354] "RemoveStaleState removing state" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768215 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="0365c802-8af2-4230-a2e7-90959d273419" containerName="barbican-keystone-listener-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768229 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="swift-recon-cron" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768237 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="0365c802-8af2-4230-a2e7-90959d273419" containerName="barbican-keystone-listener" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768246 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="proxy-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768255 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb9aa52-0895-418e-8e0b-d922948e85a7" containerName="rabbitmq" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768267 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-expirer" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768278 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-auditor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768291 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="770a4f11-b2a3-46fd-a06d-3af27edd3d9f" containerName="memcached" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768301 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="371fefce-bb16-4c48-ac5a-01885e77c090" containerName="cinder-api-log" Jan 21 15:57:32 crc 
kubenswrapper[4890]: I0121 15:57:32.768310 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerName="openstack-network-exporter" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768322 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="caae7093-b594-47fb-b863-38d825f0048d" containerName="rabbitmq" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768331 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c99515-8e62-4e54-9ffc-e9294db2dc4f" containerName="nova-cell0-conductor-conductor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768342 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-updater" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768372 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovs-vswitchd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768382 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="84118502-58f0-48b2-b659-7f748311fa22" containerName="nova-api-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768389 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="sg-core" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768396 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2780ff06-b30a-43e8-97d5-b9477d2713d6" containerName="nova-scheduler-scheduler" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768403 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-replicator" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768412 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="84118502-58f0-48b2-b659-7f748311fa22" 
containerName="nova-api-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768419 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768429 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-replicator" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768439 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-auditor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768448 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerName="glance-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768458 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bbda2a-fde6-466f-92c8-88556941b8a3" containerName="barbican-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768464 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e775a69e-619f-4920-8fc9-6d216e400c0e" containerName="glance-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768471 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-server" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768478 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-server" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768486 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="332f4b6c-7fea-4dae-bb46-3c35ee84ba25" containerName="ovn-northd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768495 4890 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="697e1d3a-fab0-471b-bea8-43212f489fec" containerName="glance-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768506 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="ceilometer-central-agent" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768514 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-reaper" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768520 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="db0e4f67-3406-4153-9fb3-3553f6fccad1" containerName="keystone-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768528 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="rsync" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768535 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-server" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768542 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="477ba084-e185-42c6-a0ae-f5de448a4d13" containerName="nova-cell1-conductor-conductor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768550 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="object-updater" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768557 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerName="neutron-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768564 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3466f4b-2d63-490d-bae0-0921a4874daa" containerName="barbican-worker-log" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768570 4890 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="account-replicator" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768577 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d46fba-02db-42e1-a916-1b2528bbdd52" containerName="container-auditor" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768584 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="902e1b21-9fb7-4302-b0f7-a832c7a42ca1" containerName="neutron-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768606 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="371fefce-bb16-4c48-ac5a-01885e77c090" containerName="cinder-api" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768615 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4099ef81-b3a1-4e17-af41-48813a488181" containerName="nova-metadata-metadata" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768623 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3466f4b-2d63-490d-bae0-0921a4874daa" containerName="barbican-worker" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768631 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="736df6ca-1308-4f87-a39e-7aca6ad4d5a1" containerName="kube-state-metrics" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768642 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="697e1d3a-fab0-471b-bea8-43212f489fec" containerName="glance-httpd" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768649 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="233162f3-fe28-4476-bc40-eb4b138ae68a" containerName="ovsdb-server" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.768659 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7a8eb5-11e0-4603-b80a-3b4f6e724770" containerName="galera" Jan 21 15:57:32 crc kubenswrapper[4890]: 
I0121 15:57:32.768668 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7b18fd-53f0-48dc-84ae-d706234668f7" containerName="ceilometer-notification-agent" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.769764 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.784651 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfvwl"] Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.823507 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-catalog-content\") pod \"redhat-marketplace-qfvwl\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.823885 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-utilities\") pod \"redhat-marketplace-qfvwl\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.823995 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dmz8\" (UniqueName: \"kubernetes.io/projected/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-kube-api-access-9dmz8\") pod \"redhat-marketplace-qfvwl\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.925304 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-catalog-content\") pod \"redhat-marketplace-qfvwl\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.925428 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-utilities\") pod \"redhat-marketplace-qfvwl\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.925450 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dmz8\" (UniqueName: \"kubernetes.io/projected/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-kube-api-access-9dmz8\") pod \"redhat-marketplace-qfvwl\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.926272 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-catalog-content\") pod \"redhat-marketplace-qfvwl\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.926562 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-utilities\") pod \"redhat-marketplace-qfvwl\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:32 crc kubenswrapper[4890]: I0121 15:57:32.945519 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dmz8\" (UniqueName: 
\"kubernetes.io/projected/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-kube-api-access-9dmz8\") pod \"redhat-marketplace-qfvwl\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:33 crc kubenswrapper[4890]: I0121 15:57:33.094870 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:33 crc kubenswrapper[4890]: I0121 15:57:33.521071 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfvwl"] Jan 21 15:57:33 crc kubenswrapper[4890]: I0121 15:57:33.796184 4890 generic.go:334] "Generic (PLEG): container finished" podID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerID="f7f1d19686fd76535516f298f8d65b15e075b8bf2ee3d555741f179ed67a65c8" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4890]: I0121 15:57:33.796227 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfvwl" event={"ID":"cc0e1e76-6373-4198-8930-da2a4ba3fb7a","Type":"ContainerDied","Data":"f7f1d19686fd76535516f298f8d65b15e075b8bf2ee3d555741f179ed67a65c8"} Jan 21 15:57:33 crc kubenswrapper[4890]: I0121 15:57:33.796265 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfvwl" event={"ID":"cc0e1e76-6373-4198-8930-da2a4ba3fb7a","Type":"ContainerStarted","Data":"04430a284a67ba8a78851b61a10a5089e2c73048786d1517b230073a54014291"} Jan 21 15:57:34 crc kubenswrapper[4890]: I0121 15:57:34.813595 4890 generic.go:334] "Generic (PLEG): container finished" podID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerID="94368036921e95f83bd16f5e357d5d672c24f094de719293c55e312da7551306" exitCode=0 Jan 21 15:57:34 crc kubenswrapper[4890]: I0121 15:57:34.813707 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfvwl" 
event={"ID":"cc0e1e76-6373-4198-8930-da2a4ba3fb7a","Type":"ContainerDied","Data":"94368036921e95f83bd16f5e357d5d672c24f094de719293c55e312da7551306"} Jan 21 15:57:35 crc kubenswrapper[4890]: I0121 15:57:35.825697 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfvwl" event={"ID":"cc0e1e76-6373-4198-8930-da2a4ba3fb7a","Type":"ContainerStarted","Data":"67d5ddc4341ec5d39058b797c1a1ee00ef37e769ed8a91f45483efd6b188d11e"} Jan 21 15:57:35 crc kubenswrapper[4890]: I0121 15:57:35.845052 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qfvwl" podStartSLOduration=2.443488659 podStartE2EDuration="3.845038204s" podCreationTimestamp="2026-01-21 15:57:32 +0000 UTC" firstStartedPulling="2026-01-21 15:57:33.797630887 +0000 UTC m=+1536.159073296" lastFinishedPulling="2026-01-21 15:57:35.199180432 +0000 UTC m=+1537.560622841" observedRunningTime="2026-01-21 15:57:35.839757073 +0000 UTC m=+1538.201199482" watchObservedRunningTime="2026-01-21 15:57:35.845038204 +0000 UTC m=+1538.206480613" Jan 21 15:57:43 crc kubenswrapper[4890]: I0121 15:57:43.095287 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:43 crc kubenswrapper[4890]: I0121 15:57:43.095938 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:43 crc kubenswrapper[4890]: I0121 15:57:43.145778 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:43 crc kubenswrapper[4890]: I0121 15:57:43.936473 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:43 crc kubenswrapper[4890]: I0121 15:57:43.983066 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-qfvwl"] Jan 21 15:57:45 crc kubenswrapper[4890]: I0121 15:57:45.906564 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qfvwl" podUID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerName="registry-server" containerID="cri-o://67d5ddc4341ec5d39058b797c1a1ee00ef37e769ed8a91f45483efd6b188d11e" gracePeriod=2 Jan 21 15:57:46 crc kubenswrapper[4890]: I0121 15:57:46.920158 4890 generic.go:334] "Generic (PLEG): container finished" podID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerID="67d5ddc4341ec5d39058b797c1a1ee00ef37e769ed8a91f45483efd6b188d11e" exitCode=0 Jan 21 15:57:46 crc kubenswrapper[4890]: I0121 15:57:46.920207 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfvwl" event={"ID":"cc0e1e76-6373-4198-8930-da2a4ba3fb7a","Type":"ContainerDied","Data":"67d5ddc4341ec5d39058b797c1a1ee00ef37e769ed8a91f45483efd6b188d11e"} Jan 21 15:57:46 crc kubenswrapper[4890]: I0121 15:57:46.920237 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfvwl" event={"ID":"cc0e1e76-6373-4198-8930-da2a4ba3fb7a","Type":"ContainerDied","Data":"04430a284a67ba8a78851b61a10a5089e2c73048786d1517b230073a54014291"} Jan 21 15:57:46 crc kubenswrapper[4890]: I0121 15:57:46.920257 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04430a284a67ba8a78851b61a10a5089e2c73048786d1517b230073a54014291" Jan 21 15:57:46 crc kubenswrapper[4890]: I0121 15:57:46.943237 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.135125 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-utilities\") pod \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.135289 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-catalog-content\") pod \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.135532 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dmz8\" (UniqueName: \"kubernetes.io/projected/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-kube-api-access-9dmz8\") pod \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\" (UID: \"cc0e1e76-6373-4198-8930-da2a4ba3fb7a\") " Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.136513 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-utilities" (OuterVolumeSpecName: "utilities") pod "cc0e1e76-6373-4198-8930-da2a4ba3fb7a" (UID: "cc0e1e76-6373-4198-8930-da2a4ba3fb7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.143714 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-kube-api-access-9dmz8" (OuterVolumeSpecName: "kube-api-access-9dmz8") pod "cc0e1e76-6373-4198-8930-da2a4ba3fb7a" (UID: "cc0e1e76-6373-4198-8930-da2a4ba3fb7a"). InnerVolumeSpecName "kube-api-access-9dmz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.167559 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc0e1e76-6373-4198-8930-da2a4ba3fb7a" (UID: "cc0e1e76-6373-4198-8930-da2a4ba3fb7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.237104 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.237157 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.237174 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dmz8\" (UniqueName: \"kubernetes.io/projected/cc0e1e76-6373-4198-8930-da2a4ba3fb7a-kube-api-access-9dmz8\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.927385 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfvwl" Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.964250 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfvwl"] Jan 21 15:57:47 crc kubenswrapper[4890]: I0121 15:57:47.969653 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfvwl"] Jan 21 15:57:49 crc kubenswrapper[4890]: I0121 15:57:49.930320 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" path="/var/lib/kubelet/pods/cc0e1e76-6373-4198-8930-da2a4ba3fb7a/volumes" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.312701 4890 scope.go:117] "RemoveContainer" containerID="9c6bd3901ab21eb3a6a896f88c75422f37111da6dc7a9f7afacad059eda2c587" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.344773 4890 scope.go:117] "RemoveContainer" containerID="81766ea3441972edff851d9a6f741a258ff1e9c431c8076fa7bee5bc3e0d1416" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.379284 4890 scope.go:117] "RemoveContainer" containerID="58a6039e13c8c21e15265060210fffee78adef95926968b02f31c6424dc6e4e2" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.408831 4890 scope.go:117] "RemoveContainer" containerID="dd79ac2382ba4799fcbadddf89c4782f72ca122863877ab13c4b968f86b00abd" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.434413 4890 scope.go:117] "RemoveContainer" containerID="2e0931f58ca1a5d447ce95af0d130834497a2a72ee0697c47fe09bd70c3de541" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.463618 4890 scope.go:117] "RemoveContainer" containerID="24cdf8018a6fb626f9f7db1ab7f2df7363bbc909a1e0c7cf765077b74524c60b" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.489513 4890 scope.go:117] "RemoveContainer" containerID="66d6987ba827025ccf444c0f90c8245315d050819facad259928746279545f54" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.526475 4890 
scope.go:117] "RemoveContainer" containerID="a2062123071fef8060317420f2552a337427a5f0cbff7ebea4e49477ef056fa3" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.555772 4890 scope.go:117] "RemoveContainer" containerID="ee8636883cf7ef685bc793e2761b19d6a77deb5c7898b985a0cc704d99683d91" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.576027 4890 scope.go:117] "RemoveContainer" containerID="b35ec4c89779e5eb7c02e3bb914d9b1910b675eef13a014bb8895f0526ae4e60" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.594806 4890 scope.go:117] "RemoveContainer" containerID="33df5a7bef461b044f5948e6d82a92f50152c168d4b306d9e90252ca8c70cd02" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.615914 4890 scope.go:117] "RemoveContainer" containerID="f20373fd78902e12450bffce9ba15c92cdf73c9b0b39d7a93161e3fb5d6f7984" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.634609 4890 scope.go:117] "RemoveContainer" containerID="81ce73a5febd9e0238771e586ff5419d5d560723626eb34cc9b2725d3302a763" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.667419 4890 scope.go:117] "RemoveContainer" containerID="3a4def230c0d590d35ba17ed3b1707460c9afd259275943b38f2410a56c09911" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.961334 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h9k97"] Jan 21 15:58:15 crc kubenswrapper[4890]: E0121 15:58:15.961871 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerName="registry-server" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.961896 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerName="registry-server" Jan 21 15:58:15 crc kubenswrapper[4890]: E0121 15:58:15.961916 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerName="extract-utilities" Jan 21 15:58:15 crc kubenswrapper[4890]: 
I0121 15:58:15.961927 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerName="extract-utilities" Jan 21 15:58:15 crc kubenswrapper[4890]: E0121 15:58:15.961942 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerName="extract-content" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.961950 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerName="extract-content" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.964523 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0e1e76-6373-4198-8930-da2a4ba3fb7a" containerName="registry-server" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.965767 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:15 crc kubenswrapper[4890]: I0121 15:58:15.966675 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9k97"] Jan 21 15:58:16 crc kubenswrapper[4890]: I0121 15:58:16.146922 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2bj2\" (UniqueName: \"kubernetes.io/projected/45093202-44a4-400f-9f69-fe2f408aba3c-kube-api-access-j2bj2\") pod \"certified-operators-h9k97\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc kubenswrapper[4890]: I0121 15:58:16.147045 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-catalog-content\") pod \"certified-operators-h9k97\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc 
kubenswrapper[4890]: I0121 15:58:16.147076 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-utilities\") pod \"certified-operators-h9k97\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc kubenswrapper[4890]: I0121 15:58:16.248636 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-catalog-content\") pod \"certified-operators-h9k97\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc kubenswrapper[4890]: I0121 15:58:16.248696 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-utilities\") pod \"certified-operators-h9k97\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc kubenswrapper[4890]: I0121 15:58:16.248762 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2bj2\" (UniqueName: \"kubernetes.io/projected/45093202-44a4-400f-9f69-fe2f408aba3c-kube-api-access-j2bj2\") pod \"certified-operators-h9k97\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc kubenswrapper[4890]: I0121 15:58:16.249250 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-catalog-content\") pod \"certified-operators-h9k97\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc 
kubenswrapper[4890]: I0121 15:58:16.249642 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-utilities\") pod \"certified-operators-h9k97\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc kubenswrapper[4890]: I0121 15:58:16.275190 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2bj2\" (UniqueName: \"kubernetes.io/projected/45093202-44a4-400f-9f69-fe2f408aba3c-kube-api-access-j2bj2\") pod \"certified-operators-h9k97\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc kubenswrapper[4890]: I0121 15:58:16.281542 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:16 crc kubenswrapper[4890]: I0121 15:58:16.793503 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9k97"] Jan 21 15:58:17 crc kubenswrapper[4890]: I0121 15:58:17.171126 4890 generic.go:334] "Generic (PLEG): container finished" podID="45093202-44a4-400f-9f69-fe2f408aba3c" containerID="a7de57ef74f2516f9da45fb3e1695679cba8e3d14b11edb45f9daa2a6d082981" exitCode=0 Jan 21 15:58:17 crc kubenswrapper[4890]: I0121 15:58:17.171167 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9k97" event={"ID":"45093202-44a4-400f-9f69-fe2f408aba3c","Type":"ContainerDied","Data":"a7de57ef74f2516f9da45fb3e1695679cba8e3d14b11edb45f9daa2a6d082981"} Jan 21 15:58:17 crc kubenswrapper[4890]: I0121 15:58:17.171457 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9k97" 
event={"ID":"45093202-44a4-400f-9f69-fe2f408aba3c","Type":"ContainerStarted","Data":"5d2331d64c25a0b1b6f691e139ee92aaa184e6b46258317396b55fcfb4dedcd8"} Jan 21 15:58:19 crc kubenswrapper[4890]: I0121 15:58:19.191659 4890 generic.go:334] "Generic (PLEG): container finished" podID="45093202-44a4-400f-9f69-fe2f408aba3c" containerID="e3e8690ec52dc7a14755d399327bad9ffbeaf4297cc93dca27ed0a28a76499df" exitCode=0 Jan 21 15:58:19 crc kubenswrapper[4890]: I0121 15:58:19.191706 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9k97" event={"ID":"45093202-44a4-400f-9f69-fe2f408aba3c","Type":"ContainerDied","Data":"e3e8690ec52dc7a14755d399327bad9ffbeaf4297cc93dca27ed0a28a76499df"} Jan 21 15:58:20 crc kubenswrapper[4890]: I0121 15:58:20.205813 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9k97" event={"ID":"45093202-44a4-400f-9f69-fe2f408aba3c","Type":"ContainerStarted","Data":"7b1091e87f1242fe364df32b45ae462b4aea4cff74180f009bdb27ff722ca8bb"} Jan 21 15:58:20 crc kubenswrapper[4890]: I0121 15:58:20.226096 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h9k97" podStartSLOduration=2.80782764 podStartE2EDuration="5.226074946s" podCreationTimestamp="2026-01-21 15:58:15 +0000 UTC" firstStartedPulling="2026-01-21 15:58:17.180710595 +0000 UTC m=+1579.542153004" lastFinishedPulling="2026-01-21 15:58:19.598957901 +0000 UTC m=+1581.960400310" observedRunningTime="2026-01-21 15:58:20.221745518 +0000 UTC m=+1582.583187937" watchObservedRunningTime="2026-01-21 15:58:20.226074946 +0000 UTC m=+1582.587517355" Jan 21 15:58:26 crc kubenswrapper[4890]: I0121 15:58:26.282317 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:26 crc kubenswrapper[4890]: I0121 15:58:26.282965 4890 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:26 crc kubenswrapper[4890]: I0121 15:58:26.331418 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:27 crc kubenswrapper[4890]: I0121 15:58:27.307103 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:27 crc kubenswrapper[4890]: I0121 15:58:27.367540 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9k97"] Jan 21 15:58:29 crc kubenswrapper[4890]: I0121 15:58:29.281029 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h9k97" podUID="45093202-44a4-400f-9f69-fe2f408aba3c" containerName="registry-server" containerID="cri-o://7b1091e87f1242fe364df32b45ae462b4aea4cff74180f009bdb27ff722ca8bb" gracePeriod=2 Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.313114 4890 generic.go:334] "Generic (PLEG): container finished" podID="45093202-44a4-400f-9f69-fe2f408aba3c" containerID="7b1091e87f1242fe364df32b45ae462b4aea4cff74180f009bdb27ff722ca8bb" exitCode=0 Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.313173 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9k97" event={"ID":"45093202-44a4-400f-9f69-fe2f408aba3c","Type":"ContainerDied","Data":"7b1091e87f1242fe364df32b45ae462b4aea4cff74180f009bdb27ff722ca8bb"} Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.492400 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.578061 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-utilities\") pod \"45093202-44a4-400f-9f69-fe2f408aba3c\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.578232 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2bj2\" (UniqueName: \"kubernetes.io/projected/45093202-44a4-400f-9f69-fe2f408aba3c-kube-api-access-j2bj2\") pod \"45093202-44a4-400f-9f69-fe2f408aba3c\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.578259 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-catalog-content\") pod \"45093202-44a4-400f-9f69-fe2f408aba3c\" (UID: \"45093202-44a4-400f-9f69-fe2f408aba3c\") " Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.579287 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-utilities" (OuterVolumeSpecName: "utilities") pod "45093202-44a4-400f-9f69-fe2f408aba3c" (UID: "45093202-44a4-400f-9f69-fe2f408aba3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.584039 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45093202-44a4-400f-9f69-fe2f408aba3c-kube-api-access-j2bj2" (OuterVolumeSpecName: "kube-api-access-j2bj2") pod "45093202-44a4-400f-9f69-fe2f408aba3c" (UID: "45093202-44a4-400f-9f69-fe2f408aba3c"). InnerVolumeSpecName "kube-api-access-j2bj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.628879 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45093202-44a4-400f-9f69-fe2f408aba3c" (UID: "45093202-44a4-400f-9f69-fe2f408aba3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.680460 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2bj2\" (UniqueName: \"kubernetes.io/projected/45093202-44a4-400f-9f69-fe2f408aba3c-kube-api-access-j2bj2\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.680511 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:32 crc kubenswrapper[4890]: I0121 15:58:32.680525 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45093202-44a4-400f-9f69-fe2f408aba3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:33 crc kubenswrapper[4890]: I0121 15:58:33.323578 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9k97" event={"ID":"45093202-44a4-400f-9f69-fe2f408aba3c","Type":"ContainerDied","Data":"5d2331d64c25a0b1b6f691e139ee92aaa184e6b46258317396b55fcfb4dedcd8"} Jan 21 15:58:33 crc kubenswrapper[4890]: I0121 15:58:33.323635 4890 scope.go:117] "RemoveContainer" containerID="7b1091e87f1242fe364df32b45ae462b4aea4cff74180f009bdb27ff722ca8bb" Jan 21 15:58:33 crc kubenswrapper[4890]: I0121 15:58:33.323651 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9k97" Jan 21 15:58:33 crc kubenswrapper[4890]: I0121 15:58:33.346413 4890 scope.go:117] "RemoveContainer" containerID="e3e8690ec52dc7a14755d399327bad9ffbeaf4297cc93dca27ed0a28a76499df" Jan 21 15:58:33 crc kubenswrapper[4890]: I0121 15:58:33.355122 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9k97"] Jan 21 15:58:33 crc kubenswrapper[4890]: I0121 15:58:33.361745 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h9k97"] Jan 21 15:58:33 crc kubenswrapper[4890]: I0121 15:58:33.383057 4890 scope.go:117] "RemoveContainer" containerID="a7de57ef74f2516f9da45fb3e1695679cba8e3d14b11edb45f9daa2a6d082981" Jan 21 15:58:33 crc kubenswrapper[4890]: I0121 15:58:33.925090 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45093202-44a4-400f-9f69-fe2f408aba3c" path="/var/lib/kubelet/pods/45093202-44a4-400f-9f69-fe2f408aba3c/volumes" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.227280 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fv444"] Jan 21 15:58:39 crc kubenswrapper[4890]: E0121 15:58:39.227928 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45093202-44a4-400f-9f69-fe2f408aba3c" containerName="extract-content" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.227944 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="45093202-44a4-400f-9f69-fe2f408aba3c" containerName="extract-content" Jan 21 15:58:39 crc kubenswrapper[4890]: E0121 15:58:39.227958 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45093202-44a4-400f-9f69-fe2f408aba3c" containerName="registry-server" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.227965 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="45093202-44a4-400f-9f69-fe2f408aba3c" containerName="registry-server" 
Jan 21 15:58:39 crc kubenswrapper[4890]: E0121 15:58:39.227978 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45093202-44a4-400f-9f69-fe2f408aba3c" containerName="extract-utilities" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.227985 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="45093202-44a4-400f-9f69-fe2f408aba3c" containerName="extract-utilities" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.228180 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="45093202-44a4-400f-9f69-fe2f408aba3c" containerName="registry-server" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.229341 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.241923 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fv444"] Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.301784 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-utilities\") pod \"community-operators-fv444\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.301881 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-catalog-content\") pod \"community-operators-fv444\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.302019 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55twb\" (UniqueName: 
\"kubernetes.io/projected/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-kube-api-access-55twb\") pod \"community-operators-fv444\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.402984 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-utilities\") pod \"community-operators-fv444\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.403069 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-catalog-content\") pod \"community-operators-fv444\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.403112 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55twb\" (UniqueName: \"kubernetes.io/projected/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-kube-api-access-55twb\") pod \"community-operators-fv444\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.403726 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-utilities\") pod \"community-operators-fv444\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.403805 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-catalog-content\") pod \"community-operators-fv444\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.426500 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55twb\" (UniqueName: \"kubernetes.io/projected/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-kube-api-access-55twb\") pod \"community-operators-fv444\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:39 crc kubenswrapper[4890]: I0121 15:58:39.546291 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:40 crc kubenswrapper[4890]: I0121 15:58:40.062063 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fv444"] Jan 21 15:58:40 crc kubenswrapper[4890]: I0121 15:58:40.402597 4890 generic.go:334] "Generic (PLEG): container finished" podID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerID="673762812a0a69e6ce72b778a987c44f4b4e12d5a062e2272fd8018093da9fe3" exitCode=0 Jan 21 15:58:40 crc kubenswrapper[4890]: I0121 15:58:40.402862 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fv444" event={"ID":"7c886542-c12f-4eaa-800b-c7ba1b40f2e0","Type":"ContainerDied","Data":"673762812a0a69e6ce72b778a987c44f4b4e12d5a062e2272fd8018093da9fe3"} Jan 21 15:58:40 crc kubenswrapper[4890]: I0121 15:58:40.402887 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fv444" event={"ID":"7c886542-c12f-4eaa-800b-c7ba1b40f2e0","Type":"ContainerStarted","Data":"1f31838f76338b42eff23c42b58e26ec6dc1b22d887e6d22365e0b9ac744a930"} Jan 21 15:58:41 crc kubenswrapper[4890]: I0121 15:58:41.413599 4890 generic.go:334] "Generic (PLEG): container 
finished" podID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerID="215a6d8ce3c210db686f18e63808187a7bd502ec01d5cf9047340d05c1f33af1" exitCode=0 Jan 21 15:58:41 crc kubenswrapper[4890]: I0121 15:58:41.413652 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fv444" event={"ID":"7c886542-c12f-4eaa-800b-c7ba1b40f2e0","Type":"ContainerDied","Data":"215a6d8ce3c210db686f18e63808187a7bd502ec01d5cf9047340d05c1f33af1"} Jan 21 15:58:42 crc kubenswrapper[4890]: I0121 15:58:42.424962 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fv444" event={"ID":"7c886542-c12f-4eaa-800b-c7ba1b40f2e0","Type":"ContainerStarted","Data":"57df6d38d763f52d0c0b88935902257693b2e2e13a48fb800c7b86a2064a9dd7"} Jan 21 15:58:42 crc kubenswrapper[4890]: I0121 15:58:42.450089 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fv444" podStartSLOduration=1.8411259439999998 podStartE2EDuration="3.45006529s" podCreationTimestamp="2026-01-21 15:58:39 +0000 UTC" firstStartedPulling="2026-01-21 15:58:40.40413101 +0000 UTC m=+1602.765573419" lastFinishedPulling="2026-01-21 15:58:42.013070356 +0000 UTC m=+1604.374512765" observedRunningTime="2026-01-21 15:58:42.443365443 +0000 UTC m=+1604.804807852" watchObservedRunningTime="2026-01-21 15:58:42.45006529 +0000 UTC m=+1604.811507709" Jan 21 15:58:48 crc kubenswrapper[4890]: I0121 15:58:48.762315 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:58:48 crc kubenswrapper[4890]: I0121 15:58:48.762900 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" 
podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:58:49 crc kubenswrapper[4890]: I0121 15:58:49.547557 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:49 crc kubenswrapper[4890]: I0121 15:58:49.547611 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:49 crc kubenswrapper[4890]: I0121 15:58:49.597429 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:50 crc kubenswrapper[4890]: I0121 15:58:50.521704 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:50 crc kubenswrapper[4890]: I0121 15:58:50.564870 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fv444"] Jan 21 15:58:52 crc kubenswrapper[4890]: I0121 15:58:52.492331 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fv444" podUID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerName="registry-server" containerID="cri-o://57df6d38d763f52d0c0b88935902257693b2e2e13a48fb800c7b86a2064a9dd7" gracePeriod=2 Jan 21 15:58:55 crc kubenswrapper[4890]: I0121 15:58:55.521669 4890 generic.go:334] "Generic (PLEG): container finished" podID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerID="57df6d38d763f52d0c0b88935902257693b2e2e13a48fb800c7b86a2064a9dd7" exitCode=0 Jan 21 15:58:55 crc kubenswrapper[4890]: I0121 15:58:55.521749 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fv444" 
event={"ID":"7c886542-c12f-4eaa-800b-c7ba1b40f2e0","Type":"ContainerDied","Data":"57df6d38d763f52d0c0b88935902257693b2e2e13a48fb800c7b86a2064a9dd7"} Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.005269 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.087279 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55twb\" (UniqueName: \"kubernetes.io/projected/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-kube-api-access-55twb\") pod \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.087375 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-utilities\") pod \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.087492 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-catalog-content\") pod \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\" (UID: \"7c886542-c12f-4eaa-800b-c7ba1b40f2e0\") " Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.088741 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-utilities" (OuterVolumeSpecName: "utilities") pod "7c886542-c12f-4eaa-800b-c7ba1b40f2e0" (UID: "7c886542-c12f-4eaa-800b-c7ba1b40f2e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.101999 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-kube-api-access-55twb" (OuterVolumeSpecName: "kube-api-access-55twb") pod "7c886542-c12f-4eaa-800b-c7ba1b40f2e0" (UID: "7c886542-c12f-4eaa-800b-c7ba1b40f2e0"). InnerVolumeSpecName "kube-api-access-55twb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.151549 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c886542-c12f-4eaa-800b-c7ba1b40f2e0" (UID: "7c886542-c12f-4eaa-800b-c7ba1b40f2e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.188974 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.189018 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.189032 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55twb\" (UniqueName: \"kubernetes.io/projected/7c886542-c12f-4eaa-800b-c7ba1b40f2e0-kube-api-access-55twb\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.533821 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fv444" 
event={"ID":"7c886542-c12f-4eaa-800b-c7ba1b40f2e0","Type":"ContainerDied","Data":"1f31838f76338b42eff23c42b58e26ec6dc1b22d887e6d22365e0b9ac744a930"} Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.533881 4890 scope.go:117] "RemoveContainer" containerID="57df6d38d763f52d0c0b88935902257693b2e2e13a48fb800c7b86a2064a9dd7" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.533938 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fv444" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.563235 4890 scope.go:117] "RemoveContainer" containerID="215a6d8ce3c210db686f18e63808187a7bd502ec01d5cf9047340d05c1f33af1" Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.579287 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fv444"] Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.587410 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fv444"] Jan 21 15:58:56 crc kubenswrapper[4890]: I0121 15:58:56.589168 4890 scope.go:117] "RemoveContainer" containerID="673762812a0a69e6ce72b778a987c44f4b4e12d5a062e2272fd8018093da9fe3" Jan 21 15:58:57 crc kubenswrapper[4890]: I0121 15:58:57.922327 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" path="/var/lib/kubelet/pods/7c886542-c12f-4eaa-800b-c7ba1b40f2e0/volumes" Jan 21 15:59:15 crc kubenswrapper[4890]: I0121 15:59:15.883593 4890 scope.go:117] "RemoveContainer" containerID="e1a79afa342facaff6b7dba587375b37a5b33902571feffefe046acbe30019ad" Jan 21 15:59:15 crc kubenswrapper[4890]: I0121 15:59:15.904826 4890 scope.go:117] "RemoveContainer" containerID="489037191e7d74a2730eac1c46abc09d34fce2781e436638fcd47291281cfd30" Jan 21 15:59:15 crc kubenswrapper[4890]: I0121 15:59:15.925832 4890 scope.go:117] "RemoveContainer" 
containerID="de33dfbf0dbdc2111900a2a70b6bed546d48c92feb4b8907baecfaea0ab70977" Jan 21 15:59:15 crc kubenswrapper[4890]: I0121 15:59:15.942674 4890 scope.go:117] "RemoveContainer" containerID="37c37359b700ae7dab497ac93d747b6bf79f54edc4ddd64a222b2e24e26b0e48" Jan 21 15:59:15 crc kubenswrapper[4890]: I0121 15:59:15.977766 4890 scope.go:117] "RemoveContainer" containerID="8288419be0984195f05e824990ff0b35010407e8281682f9c40f8f53106793d8" Jan 21 15:59:15 crc kubenswrapper[4890]: I0121 15:59:15.997371 4890 scope.go:117] "RemoveContainer" containerID="bff60feda0b6764dfca45482723eee1a185f385fe89ec7e4db2d7b58ad76e34b" Jan 21 15:59:16 crc kubenswrapper[4890]: I0121 15:59:16.028124 4890 scope.go:117] "RemoveContainer" containerID="fe578b5a9120cabe848837eb1fe2b519560f4c2d26dee5d0a871fa17df5e83f7" Jan 21 15:59:16 crc kubenswrapper[4890]: I0121 15:59:16.064991 4890 scope.go:117] "RemoveContainer" containerID="eaf7230cafb909a1fed57ee77b4de32797dc9fca4fefd4c6cbb0781439b57e03" Jan 21 15:59:16 crc kubenswrapper[4890]: I0121 15:59:16.099570 4890 scope.go:117] "RemoveContainer" containerID="0d51c4e84ed9dc0d609610989bd5d3017dd5dceb256f192563ab877302776690" Jan 21 15:59:16 crc kubenswrapper[4890]: I0121 15:59:16.120887 4890 scope.go:117] "RemoveContainer" containerID="8a600df015f1f8de1958bab6bd14cbbafa4e20cc6adeaa682684122d5cd783ab" Jan 21 15:59:16 crc kubenswrapper[4890]: I0121 15:59:16.151719 4890 scope.go:117] "RemoveContainer" containerID="ce09e5c3848074e56c8f1304c5eeb0f39e955c23e11561464cb386c01b9d2fe2" Jan 21 15:59:16 crc kubenswrapper[4890]: I0121 15:59:16.200602 4890 scope.go:117] "RemoveContainer" containerID="05401006a7a2ad8b1aa8b60e1077170d3fc35d0dc28d5fca2fa129ee967173b5" Jan 21 15:59:16 crc kubenswrapper[4890]: I0121 15:59:16.218543 4890 scope.go:117] "RemoveContainer" containerID="290fcae6cfffbec443f9447fbac0c1272c7474f5ad004686e58a539395e8ba3d" Jan 21 15:59:16 crc kubenswrapper[4890]: I0121 15:59:16.246926 4890 scope.go:117] "RemoveContainer" 
containerID="3228c1d75258cedf57a5135ce6d6d2bc6c7abf0865a062cab2068cc01ef96f79" Jan 21 15:59:16 crc kubenswrapper[4890]: I0121 15:59:16.271800 4890 scope.go:117] "RemoveContainer" containerID="d9f68bc7764f75ac9e2f265b029157c523efe523b7ad9fb1d218658e82fd95fe" Jan 21 15:59:18 crc kubenswrapper[4890]: I0121 15:59:18.762268 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:59:18 crc kubenswrapper[4890]: I0121 15:59:18.762368 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:59:48 crc kubenswrapper[4890]: I0121 15:59:48.762157 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:59:48 crc kubenswrapper[4890]: I0121 15:59:48.762774 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:59:48 crc kubenswrapper[4890]: I0121 15:59:48.762823 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 15:59:48 crc 
kubenswrapper[4890]: I0121 15:59:48.763468 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:59:48 crc kubenswrapper[4890]: I0121 15:59:48.763531 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" gracePeriod=600 Jan 21 15:59:48 crc kubenswrapper[4890]: I0121 15:59:48.926639 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" exitCode=0 Jan 21 15:59:48 crc kubenswrapper[4890]: I0121 15:59:48.926728 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c"} Jan 21 15:59:48 crc kubenswrapper[4890]: I0121 15:59:48.927030 4890 scope.go:117] "RemoveContainer" containerID="6c1674a2bd424bd7189f15c6273406528477da9f8b31d68e03fb7356078df89f" Jan 21 15:59:49 crc kubenswrapper[4890]: E0121 15:59:49.387679 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 15:59:49 crc kubenswrapper[4890]: I0121 15:59:49.952636 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 15:59:49 crc kubenswrapper[4890]: E0121 15:59:49.952919 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.141369 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx"] Jan 21 16:00:00 crc kubenswrapper[4890]: E0121 16:00:00.142038 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerName="extract-content" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.142056 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerName="extract-content" Jan 21 16:00:00 crc kubenswrapper[4890]: E0121 16:00:00.142066 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerName="registry-server" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.142073 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerName="registry-server" Jan 21 16:00:00 crc kubenswrapper[4890]: E0121 16:00:00.142108 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerName="extract-utilities" Jan 21 16:00:00 crc 
kubenswrapper[4890]: I0121 16:00:00.142118 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerName="extract-utilities" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.142338 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c886542-c12f-4eaa-800b-c7ba1b40f2e0" containerName="registry-server" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.142905 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.145505 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.145869 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.154824 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx"] Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.293089 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-config-volume\") pod \"collect-profiles-29483520-bjqvx\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.293329 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-secret-volume\") pod \"collect-profiles-29483520-bjqvx\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.293416 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xsm\" (UniqueName: \"kubernetes.io/projected/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-kube-api-access-n2xsm\") pod \"collect-profiles-29483520-bjqvx\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.394201 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-secret-volume\") pod \"collect-profiles-29483520-bjqvx\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.394279 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xsm\" (UniqueName: \"kubernetes.io/projected/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-kube-api-access-n2xsm\") pod \"collect-profiles-29483520-bjqvx\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.394383 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-config-volume\") pod \"collect-profiles-29483520-bjqvx\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.395385 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-config-volume\") pod \"collect-profiles-29483520-bjqvx\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.402098 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-secret-volume\") pod \"collect-profiles-29483520-bjqvx\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.412620 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xsm\" (UniqueName: \"kubernetes.io/projected/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-kube-api-access-n2xsm\") pod \"collect-profiles-29483520-bjqvx\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.465920 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:00 crc kubenswrapper[4890]: I0121 16:00:00.881412 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx"] Jan 21 16:00:01 crc kubenswrapper[4890]: I0121 16:00:01.028197 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" event={"ID":"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae","Type":"ContainerStarted","Data":"5443b9f5cb279bc1755cbf5f79fa2a3b0bdd25fb7484a56fe7cba2280f534267"} Jan 21 16:00:02 crc kubenswrapper[4890]: I0121 16:00:02.038660 4890 generic.go:334] "Generic (PLEG): container finished" podID="f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae" containerID="ae10d6a2bde58e1fa209819f6160516f26a9ff0a5b2330688735e0d91cfe4cbc" exitCode=0 Jan 21 16:00:02 crc kubenswrapper[4890]: I0121 16:00:02.038720 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" event={"ID":"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae","Type":"ContainerDied","Data":"ae10d6a2bde58e1fa209819f6160516f26a9ff0a5b2330688735e0d91cfe4cbc"} Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.292713 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.436616 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-config-volume\") pod \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.436950 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-secret-volume\") pod \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.437078 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2xsm\" (UniqueName: \"kubernetes.io/projected/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-kube-api-access-n2xsm\") pod \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\" (UID: \"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae\") " Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.437569 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-config-volume" (OuterVolumeSpecName: "config-volume") pod "f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae" (UID: "f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.441182 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae" (UID: "f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.447634 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-kube-api-access-n2xsm" (OuterVolumeSpecName: "kube-api-access-n2xsm") pod "f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae" (UID: "f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae"). InnerVolumeSpecName "kube-api-access-n2xsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.539124 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.539161 4890 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:03 crc kubenswrapper[4890]: I0121 16:00:03.539172 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2xsm\" (UniqueName: \"kubernetes.io/projected/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae-kube-api-access-n2xsm\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:04 crc kubenswrapper[4890]: I0121 16:00:04.055943 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" event={"ID":"f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae","Type":"ContainerDied","Data":"5443b9f5cb279bc1755cbf5f79fa2a3b0bdd25fb7484a56fe7cba2280f534267"} Jan 21 16:00:04 crc kubenswrapper[4890]: I0121 16:00:04.056010 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5443b9f5cb279bc1755cbf5f79fa2a3b0bdd25fb7484a56fe7cba2280f534267" Jan 21 16:00:04 crc kubenswrapper[4890]: I0121 16:00:04.055977 4890 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx" Jan 21 16:00:04 crc kubenswrapper[4890]: I0121 16:00:04.914967 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:00:04 crc kubenswrapper[4890]: E0121 16:00:04.915311 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:00:16 crc kubenswrapper[4890]: I0121 16:00:16.485899 4890 scope.go:117] "RemoveContainer" containerID="17017fb4db752be398957128e72379f1e6bbd55f2c985855c266996c3fbae23f" Jan 21 16:00:16 crc kubenswrapper[4890]: I0121 16:00:16.504532 4890 scope.go:117] "RemoveContainer" containerID="900fd7400b1809198eec2f87e30f7758ce7b4277e57b7f022112ff25e938935f" Jan 21 16:00:16 crc kubenswrapper[4890]: I0121 16:00:16.528953 4890 scope.go:117] "RemoveContainer" containerID="ac358f25d3bc11ecfd3d8286ee71238981958d5ba551cfdc752cc98b87178c26" Jan 21 16:00:16 crc kubenswrapper[4890]: I0121 16:00:16.546939 4890 scope.go:117] "RemoveContainer" containerID="c851675434b92477b334529677e77e7111d37132ce27fc8672bf63399697507c" Jan 21 16:00:16 crc kubenswrapper[4890]: I0121 16:00:16.585711 4890 scope.go:117] "RemoveContainer" containerID="99a269376f3d61532e81882e440bd69f0806de9201603b30a97c15670bbae6d5" Jan 21 16:00:16 crc kubenswrapper[4890]: I0121 16:00:16.602790 4890 scope.go:117] "RemoveContainer" containerID="f7af32f3b549ff9c597f62cbdae56ad477fa8bc3a6f8183f6ee62dcfb55b8bba" Jan 21 16:00:16 crc kubenswrapper[4890]: I0121 16:00:16.626663 4890 scope.go:117] "RemoveContainer" 
containerID="4a223e2232d09a7902cafe5997f0744b43b30ca16b7805665ca1778aa131272b" Jan 21 16:00:16 crc kubenswrapper[4890]: I0121 16:00:16.640229 4890 scope.go:117] "RemoveContainer" containerID="00f82df8616c87c0912d9dbe1f4399e94dfccbba1ce9e11654e84e00c004fd71" Jan 21 16:00:16 crc kubenswrapper[4890]: I0121 16:00:16.913674 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:00:16 crc kubenswrapper[4890]: E0121 16:00:16.913900 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:00:31 crc kubenswrapper[4890]: I0121 16:00:31.914984 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:00:31 crc kubenswrapper[4890]: E0121 16:00:31.915742 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:00:45 crc kubenswrapper[4890]: I0121 16:00:45.913973 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:00:45 crc kubenswrapper[4890]: E0121 16:00:45.914600 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:01:00 crc kubenswrapper[4890]: I0121 16:01:00.914526 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:01:00 crc kubenswrapper[4890]: E0121 16:01:00.915313 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:01:12 crc kubenswrapper[4890]: I0121 16:01:12.914572 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:01:12 crc kubenswrapper[4890]: E0121 16:01:12.915295 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:01:16 crc kubenswrapper[4890]: I0121 16:01:16.724104 4890 scope.go:117] "RemoveContainer" containerID="4f66fb67fe1dced259cb614ff78069ee18ae227a09963d6bdd3e2e7adb0e8d72" Jan 21 16:01:16 crc kubenswrapper[4890]: I0121 16:01:16.751554 4890 scope.go:117] "RemoveContainer" containerID="3cbd1e381f8a05f81b896d527f93626ca4b45dcad610a582cdfcabb31938696c" Jan 21 16:01:16 crc 
kubenswrapper[4890]: I0121 16:01:16.803237 4890 scope.go:117] "RemoveContainer" containerID="745ce70403665468b313680739c5b2b32b49b16f488cddbad84df4f2edafcc2c" Jan 21 16:01:16 crc kubenswrapper[4890]: I0121 16:01:16.818807 4890 scope.go:117] "RemoveContainer" containerID="6772a790cec8c1317eefe66fb08e48511477a083656c68be868332c54d81cfd3" Jan 21 16:01:16 crc kubenswrapper[4890]: I0121 16:01:16.842015 4890 scope.go:117] "RemoveContainer" containerID="ac959b1ac528e4de2e66685bb7abda5d333740849f8b9ca6b1161716bbc68588" Jan 21 16:01:16 crc kubenswrapper[4890]: I0121 16:01:16.897814 4890 scope.go:117] "RemoveContainer" containerID="11db58464c557dc77655a8fd1850c22413732365a8fefd05581911c35155a41d" Jan 21 16:01:16 crc kubenswrapper[4890]: I0121 16:01:16.917402 4890 scope.go:117] "RemoveContainer" containerID="10c72ed0b55323e6e81eb28fdd4bd49dbab1b1a9fb1044a4284d618c6d81f405" Jan 21 16:01:16 crc kubenswrapper[4890]: I0121 16:01:16.953941 4890 scope.go:117] "RemoveContainer" containerID="f1c1f17d85bf8bb53eef6757f0bd48539a6d6eae586a8fb378144b48d4a6d1e0" Jan 21 16:01:16 crc kubenswrapper[4890]: I0121 16:01:16.971369 4890 scope.go:117] "RemoveContainer" containerID="b97e050ad2a00d179937618daed6f81f2c163cafb139936cbf8b07d6cb0ad28f" Jan 21 16:01:26 crc kubenswrapper[4890]: I0121 16:01:26.913796 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:01:26 crc kubenswrapper[4890]: E0121 16:01:26.914654 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:01:37 crc kubenswrapper[4890]: I0121 16:01:37.919525 4890 scope.go:117] 
"RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:01:37 crc kubenswrapper[4890]: E0121 16:01:37.920503 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:01:49 crc kubenswrapper[4890]: I0121 16:01:49.914440 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:01:49 crc kubenswrapper[4890]: E0121 16:01:49.915780 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:02:01 crc kubenswrapper[4890]: I0121 16:02:01.914838 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:02:01 crc kubenswrapper[4890]: E0121 16:02:01.915763 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:02:14 crc kubenswrapper[4890]: I0121 16:02:14.913712 
4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:02:14 crc kubenswrapper[4890]: E0121 16:02:14.914483 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:02:17 crc kubenswrapper[4890]: I0121 16:02:17.133533 4890 scope.go:117] "RemoveContainer" containerID="7b18fb00e90fa86f50f2bfba1494e010b64f7782eb0d47b4b7973db36cc104b3" Jan 21 16:02:25 crc kubenswrapper[4890]: I0121 16:02:25.914118 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:02:25 crc kubenswrapper[4890]: E0121 16:02:25.914909 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:02:40 crc kubenswrapper[4890]: I0121 16:02:40.914898 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:02:40 crc kubenswrapper[4890]: E0121 16:02:40.915650 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:02:51 crc kubenswrapper[4890]: I0121 16:02:51.913876 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:02:51 crc kubenswrapper[4890]: E0121 16:02:51.914497 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:03:04 crc kubenswrapper[4890]: I0121 16:03:04.914982 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:03:04 crc kubenswrapper[4890]: E0121 16:03:04.916495 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:03:19 crc kubenswrapper[4890]: I0121 16:03:19.914875 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:03:19 crc kubenswrapper[4890]: E0121 16:03:19.915587 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:03:34 crc kubenswrapper[4890]: I0121 16:03:34.914050 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:03:34 crc kubenswrapper[4890]: E0121 16:03:34.914867 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:03:48 crc kubenswrapper[4890]: I0121 16:03:48.914567 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:03:48 crc kubenswrapper[4890]: E0121 16:03:48.915955 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:03:59 crc kubenswrapper[4890]: I0121 16:03:59.914684 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:03:59 crc kubenswrapper[4890]: E0121 16:03:59.915179 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:04:13 crc kubenswrapper[4890]: I0121 16:04:13.914195 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:04:13 crc kubenswrapper[4890]: E0121 16:04:13.915220 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:04:17 crc kubenswrapper[4890]: I0121 16:04:17.211302 4890 scope.go:117] "RemoveContainer" containerID="f7f1d19686fd76535516f298f8d65b15e075b8bf2ee3d555741f179ed67a65c8" Jan 21 16:04:17 crc kubenswrapper[4890]: I0121 16:04:17.229328 4890 scope.go:117] "RemoveContainer" containerID="67d5ddc4341ec5d39058b797c1a1ee00ef37e769ed8a91f45483efd6b188d11e" Jan 21 16:04:17 crc kubenswrapper[4890]: I0121 16:04:17.256539 4890 scope.go:117] "RemoveContainer" containerID="94368036921e95f83bd16f5e357d5d672c24f094de719293c55e312da7551306" Jan 21 16:04:25 crc kubenswrapper[4890]: I0121 16:04:25.914313 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:04:25 crc kubenswrapper[4890]: E0121 16:04:25.915207 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.266799 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-859db"] Jan 21 16:04:29 crc kubenswrapper[4890]: E0121 16:04:29.267509 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae" containerName="collect-profiles" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.267527 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae" containerName="collect-profiles" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.267904 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae" containerName="collect-profiles" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.269041 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.285580 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-859db"] Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.387872 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-utilities\") pod \"redhat-operators-859db\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.387929 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-catalog-content\") pod \"redhat-operators-859db\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.387975 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9lrc\" (UniqueName: \"kubernetes.io/projected/1d70ead0-12c4-4645-a9a8-274bf311256e-kube-api-access-j9lrc\") pod \"redhat-operators-859db\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.489311 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-catalog-content\") pod \"redhat-operators-859db\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.489389 4890 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-j9lrc\" (UniqueName: \"kubernetes.io/projected/1d70ead0-12c4-4645-a9a8-274bf311256e-kube-api-access-j9lrc\") pod \"redhat-operators-859db\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.489487 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-utilities\") pod \"redhat-operators-859db\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.489918 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-utilities\") pod \"redhat-operators-859db\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.490010 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-catalog-content\") pod \"redhat-operators-859db\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.513551 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9lrc\" (UniqueName: \"kubernetes.io/projected/1d70ead0-12c4-4645-a9a8-274bf311256e-kube-api-access-j9lrc\") pod \"redhat-operators-859db\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:29 crc kubenswrapper[4890]: I0121 16:04:29.589376 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:30 crc kubenswrapper[4890]: I0121 16:04:30.051120 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-859db"] Jan 21 16:04:30 crc kubenswrapper[4890]: I0121 16:04:30.930919 4890 generic.go:334] "Generic (PLEG): container finished" podID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerID="1074a2659afb7f37c928f73ae3a567e46fc4039bf48af75b2f23ed8d9ad70116" exitCode=0 Jan 21 16:04:30 crc kubenswrapper[4890]: I0121 16:04:30.931031 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859db" event={"ID":"1d70ead0-12c4-4645-a9a8-274bf311256e","Type":"ContainerDied","Data":"1074a2659afb7f37c928f73ae3a567e46fc4039bf48af75b2f23ed8d9ad70116"} Jan 21 16:04:30 crc kubenswrapper[4890]: I0121 16:04:30.931542 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859db" event={"ID":"1d70ead0-12c4-4645-a9a8-274bf311256e","Type":"ContainerStarted","Data":"76604535c475c85dc4bacaf8c272dad05e86c0a288cc98dea33f569b8071528a"} Jan 21 16:04:30 crc kubenswrapper[4890]: I0121 16:04:30.933419 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:04:33 crc kubenswrapper[4890]: I0121 16:04:33.949046 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859db" event={"ID":"1d70ead0-12c4-4645-a9a8-274bf311256e","Type":"ContainerStarted","Data":"8307339991b1f0c7d0300f999a826ab00791aeb054fd041a6eebadbb42da9399"} Jan 21 16:04:35 crc kubenswrapper[4890]: I0121 16:04:35.964746 4890 generic.go:334] "Generic (PLEG): container finished" podID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerID="8307339991b1f0c7d0300f999a826ab00791aeb054fd041a6eebadbb42da9399" exitCode=0 Jan 21 16:04:35 crc kubenswrapper[4890]: I0121 16:04:35.964867 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-859db" event={"ID":"1d70ead0-12c4-4645-a9a8-274bf311256e","Type":"ContainerDied","Data":"8307339991b1f0c7d0300f999a826ab00791aeb054fd041a6eebadbb42da9399"} Jan 21 16:04:36 crc kubenswrapper[4890]: I0121 16:04:36.914954 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:04:36 crc kubenswrapper[4890]: E0121 16:04:36.916130 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:04:37 crc kubenswrapper[4890]: I0121 16:04:37.980754 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859db" event={"ID":"1d70ead0-12c4-4645-a9a8-274bf311256e","Type":"ContainerStarted","Data":"f849e46fb597eec9fc14dcf2aa680b921d328e9e34786339bf245d0ec73fe657"} Jan 21 16:04:39 crc kubenswrapper[4890]: I0121 16:04:39.009857 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-859db" podStartSLOduration=3.62318265 podStartE2EDuration="10.009828006s" podCreationTimestamp="2026-01-21 16:04:29 +0000 UTC" firstStartedPulling="2026-01-21 16:04:30.932994577 +0000 UTC m=+1953.294436986" lastFinishedPulling="2026-01-21 16:04:37.319639933 +0000 UTC m=+1959.681082342" observedRunningTime="2026-01-21 16:04:39.004912904 +0000 UTC m=+1961.366355313" watchObservedRunningTime="2026-01-21 16:04:39.009828006 +0000 UTC m=+1961.371270425" Jan 21 16:04:39 crc kubenswrapper[4890]: I0121 16:04:39.589973 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:39 crc kubenswrapper[4890]: I0121 16:04:39.590054 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:40 crc kubenswrapper[4890]: I0121 16:04:40.634183 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-859db" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerName="registry-server" probeResult="failure" output=< Jan 21 16:04:40 crc kubenswrapper[4890]: timeout: failed to connect service ":50051" within 1s Jan 21 16:04:40 crc kubenswrapper[4890]: > Jan 21 16:04:49 crc kubenswrapper[4890]: I0121 16:04:49.636180 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:49 crc kubenswrapper[4890]: I0121 16:04:49.685644 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:49 crc kubenswrapper[4890]: I0121 16:04:49.872802 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-859db"] Jan 21 16:04:50 crc kubenswrapper[4890]: I0121 16:04:50.914394 4890 scope.go:117] "RemoveContainer" containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:04:51 crc kubenswrapper[4890]: I0121 16:04:51.070219 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-859db" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerName="registry-server" containerID="cri-o://f849e46fb597eec9fc14dcf2aa680b921d328e9e34786339bf245d0ec73fe657" gracePeriod=2 Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.102318 4890 generic.go:334] "Generic (PLEG): container finished" podID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerID="f849e46fb597eec9fc14dcf2aa680b921d328e9e34786339bf245d0ec73fe657" 
exitCode=0 Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.102491 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859db" event={"ID":"1d70ead0-12c4-4645-a9a8-274bf311256e","Type":"ContainerDied","Data":"f849e46fb597eec9fc14dcf2aa680b921d328e9e34786339bf245d0ec73fe657"} Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.110072 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"712ab443c28dddd67de6bfe92e08fe5c10c84c70c53493bc8ab47fee197a105e"} Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.209642 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.359924 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-catalog-content\") pod \"1d70ead0-12c4-4645-a9a8-274bf311256e\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.360040 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-utilities\") pod \"1d70ead0-12c4-4645-a9a8-274bf311256e\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.360174 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9lrc\" (UniqueName: \"kubernetes.io/projected/1d70ead0-12c4-4645-a9a8-274bf311256e-kube-api-access-j9lrc\") pod \"1d70ead0-12c4-4645-a9a8-274bf311256e\" (UID: \"1d70ead0-12c4-4645-a9a8-274bf311256e\") " Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.361560 4890 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-utilities" (OuterVolumeSpecName: "utilities") pod "1d70ead0-12c4-4645-a9a8-274bf311256e" (UID: "1d70ead0-12c4-4645-a9a8-274bf311256e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.366590 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d70ead0-12c4-4645-a9a8-274bf311256e-kube-api-access-j9lrc" (OuterVolumeSpecName: "kube-api-access-j9lrc") pod "1d70ead0-12c4-4645-a9a8-274bf311256e" (UID: "1d70ead0-12c4-4645-a9a8-274bf311256e"). InnerVolumeSpecName "kube-api-access-j9lrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.462126 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.462161 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9lrc\" (UniqueName: \"kubernetes.io/projected/1d70ead0-12c4-4645-a9a8-274bf311256e-kube-api-access-j9lrc\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.478883 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d70ead0-12c4-4645-a9a8-274bf311256e" (UID: "1d70ead0-12c4-4645-a9a8-274bf311256e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:55 crc kubenswrapper[4890]: I0121 16:04:55.563169 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d70ead0-12c4-4645-a9a8-274bf311256e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:56 crc kubenswrapper[4890]: I0121 16:04:56.118048 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859db" event={"ID":"1d70ead0-12c4-4645-a9a8-274bf311256e","Type":"ContainerDied","Data":"76604535c475c85dc4bacaf8c272dad05e86c0a288cc98dea33f569b8071528a"} Jan 21 16:04:56 crc kubenswrapper[4890]: I0121 16:04:56.118117 4890 scope.go:117] "RemoveContainer" containerID="f849e46fb597eec9fc14dcf2aa680b921d328e9e34786339bf245d0ec73fe657" Jan 21 16:04:56 crc kubenswrapper[4890]: I0121 16:04:56.118977 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-859db" Jan 21 16:04:56 crc kubenswrapper[4890]: I0121 16:04:56.140975 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-859db"] Jan 21 16:04:56 crc kubenswrapper[4890]: I0121 16:04:56.145135 4890 scope.go:117] "RemoveContainer" containerID="8307339991b1f0c7d0300f999a826ab00791aeb054fd041a6eebadbb42da9399" Jan 21 16:04:56 crc kubenswrapper[4890]: I0121 16:04:56.149603 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-859db"] Jan 21 16:04:56 crc kubenswrapper[4890]: I0121 16:04:56.167421 4890 scope.go:117] "RemoveContainer" containerID="1074a2659afb7f37c928f73ae3a567e46fc4039bf48af75b2f23ed8d9ad70116" Jan 21 16:04:57 crc kubenswrapper[4890]: I0121 16:04:57.933057 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" path="/var/lib/kubelet/pods/1d70ead0-12c4-4645-a9a8-274bf311256e/volumes" Jan 21 16:07:18 crc 
kubenswrapper[4890]: I0121 16:07:18.762032 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:07:18 crc kubenswrapper[4890]: I0121 16:07:18.762701 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:07:48 crc kubenswrapper[4890]: I0121 16:07:48.762610 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:07:48 crc kubenswrapper[4890]: I0121 16:07:48.763721 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:08:18 crc kubenswrapper[4890]: I0121 16:08:18.762695 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:08:18 crc kubenswrapper[4890]: I0121 16:08:18.764268 4890 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:08:18 crc kubenswrapper[4890]: I0121 16:08:18.764404 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:08:18 crc kubenswrapper[4890]: I0121 16:08:18.765122 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"712ab443c28dddd67de6bfe92e08fe5c10c84c70c53493bc8ab47fee197a105e"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:08:18 crc kubenswrapper[4890]: I0121 16:08:18.765288 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://712ab443c28dddd67de6bfe92e08fe5c10c84c70c53493bc8ab47fee197a105e" gracePeriod=600 Jan 21 16:08:19 crc kubenswrapper[4890]: I0121 16:08:19.531187 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="712ab443c28dddd67de6bfe92e08fe5c10c84c70c53493bc8ab47fee197a105e" exitCode=0 Jan 21 16:08:19 crc kubenswrapper[4890]: I0121 16:08:19.531253 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"712ab443c28dddd67de6bfe92e08fe5c10c84c70c53493bc8ab47fee197a105e"} Jan 21 16:08:19 crc kubenswrapper[4890]: I0121 16:08:19.531651 4890 scope.go:117] "RemoveContainer" 
containerID="c79457015f6546d209e0639bb850afbcda0d0ad4b2d01109b4ce313b7977e91c" Jan 21 16:08:20 crc kubenswrapper[4890]: I0121 16:08:20.540510 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063"} Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.000925 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jvqdq"] Jan 21 16:08:30 crc kubenswrapper[4890]: E0121 16:08:30.001889 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerName="extract-utilities" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.001908 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerName="extract-utilities" Jan 21 16:08:30 crc kubenswrapper[4890]: E0121 16:08:30.001925 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerName="registry-server" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.001933 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerName="registry-server" Jan 21 16:08:30 crc kubenswrapper[4890]: E0121 16:08:30.001955 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerName="extract-content" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.001966 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerName="extract-content" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.002129 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d70ead0-12c4-4645-a9a8-274bf311256e" containerName="registry-server" Jan 21 16:08:30 
crc kubenswrapper[4890]: I0121 16:08:30.003440 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.020289 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvqdq"] Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.093912 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-utilities\") pod \"redhat-marketplace-jvqdq\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.094018 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-catalog-content\") pod \"redhat-marketplace-jvqdq\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.094068 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfnln\" (UniqueName: \"kubernetes.io/projected/ecde2529-e479-46f5-9353-933279300d29-kube-api-access-qfnln\") pod \"redhat-marketplace-jvqdq\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.195033 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-catalog-content\") pod \"redhat-marketplace-jvqdq\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 
16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.195105 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfnln\" (UniqueName: \"kubernetes.io/projected/ecde2529-e479-46f5-9353-933279300d29-kube-api-access-qfnln\") pod \"redhat-marketplace-jvqdq\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.195158 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-utilities\") pod \"redhat-marketplace-jvqdq\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.195619 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-catalog-content\") pod \"redhat-marketplace-jvqdq\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.195667 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-utilities\") pod \"redhat-marketplace-jvqdq\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.219212 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfnln\" (UniqueName: \"kubernetes.io/projected/ecde2529-e479-46f5-9353-933279300d29-kube-api-access-qfnln\") pod \"redhat-marketplace-jvqdq\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 
16:08:30.320997 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:30 crc kubenswrapper[4890]: I0121 16:08:30.754537 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvqdq"] Jan 21 16:08:31 crc kubenswrapper[4890]: I0121 16:08:31.615698 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvqdq" event={"ID":"ecde2529-e479-46f5-9353-933279300d29","Type":"ContainerStarted","Data":"ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e"} Jan 21 16:08:31 crc kubenswrapper[4890]: I0121 16:08:31.615753 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvqdq" event={"ID":"ecde2529-e479-46f5-9353-933279300d29","Type":"ContainerStarted","Data":"a395a908cf4191f128afa70bd56800b8b1c3f1fc31659f98b196e35af1466a1b"} Jan 21 16:08:32 crc kubenswrapper[4890]: I0121 16:08:32.622863 4890 generic.go:334] "Generic (PLEG): container finished" podID="ecde2529-e479-46f5-9353-933279300d29" containerID="ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e" exitCode=0 Jan 21 16:08:32 crc kubenswrapper[4890]: I0121 16:08:32.622903 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvqdq" event={"ID":"ecde2529-e479-46f5-9353-933279300d29","Type":"ContainerDied","Data":"ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e"} Jan 21 16:08:35 crc kubenswrapper[4890]: I0121 16:08:35.644463 4890 generic.go:334] "Generic (PLEG): container finished" podID="ecde2529-e479-46f5-9353-933279300d29" containerID="d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1" exitCode=0 Jan 21 16:08:35 crc kubenswrapper[4890]: I0121 16:08:35.644565 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvqdq" 
event={"ID":"ecde2529-e479-46f5-9353-933279300d29","Type":"ContainerDied","Data":"d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1"} Jan 21 16:08:37 crc kubenswrapper[4890]: I0121 16:08:37.658152 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvqdq" event={"ID":"ecde2529-e479-46f5-9353-933279300d29","Type":"ContainerStarted","Data":"7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a"} Jan 21 16:08:37 crc kubenswrapper[4890]: I0121 16:08:37.680580 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jvqdq" podStartSLOduration=4.153345161 podStartE2EDuration="8.68055988s" podCreationTimestamp="2026-01-21 16:08:29 +0000 UTC" firstStartedPulling="2026-01-21 16:08:32.624950505 +0000 UTC m=+2194.986392914" lastFinishedPulling="2026-01-21 16:08:37.152165224 +0000 UTC m=+2199.513607633" observedRunningTime="2026-01-21 16:08:37.674027997 +0000 UTC m=+2200.035470416" watchObservedRunningTime="2026-01-21 16:08:37.68055988 +0000 UTC m=+2200.042002289" Jan 21 16:08:40 crc kubenswrapper[4890]: I0121 16:08:40.321395 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:40 crc kubenswrapper[4890]: I0121 16:08:40.321832 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:40 crc kubenswrapper[4890]: I0121 16:08:40.371309 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:50 crc kubenswrapper[4890]: I0121 16:08:50.361270 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:50 crc kubenswrapper[4890]: I0121 16:08:50.406172 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-jvqdq"] Jan 21 16:08:50 crc kubenswrapper[4890]: I0121 16:08:50.763811 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jvqdq" podUID="ecde2529-e479-46f5-9353-933279300d29" containerName="registry-server" containerID="cri-o://7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a" gracePeriod=2 Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.664631 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.714523 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-catalog-content\") pod \"ecde2529-e479-46f5-9353-933279300d29\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.714820 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfnln\" (UniqueName: \"kubernetes.io/projected/ecde2529-e479-46f5-9353-933279300d29-kube-api-access-qfnln\") pod \"ecde2529-e479-46f5-9353-933279300d29\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.714911 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-utilities\") pod \"ecde2529-e479-46f5-9353-933279300d29\" (UID: \"ecde2529-e479-46f5-9353-933279300d29\") " Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.716670 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-utilities" (OuterVolumeSpecName: "utilities") pod "ecde2529-e479-46f5-9353-933279300d29" (UID: 
"ecde2529-e479-46f5-9353-933279300d29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.724691 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecde2529-e479-46f5-9353-933279300d29-kube-api-access-qfnln" (OuterVolumeSpecName: "kube-api-access-qfnln") pod "ecde2529-e479-46f5-9353-933279300d29" (UID: "ecde2529-e479-46f5-9353-933279300d29"). InnerVolumeSpecName "kube-api-access-qfnln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.743155 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecde2529-e479-46f5-9353-933279300d29" (UID: "ecde2529-e479-46f5-9353-933279300d29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.772478 4890 generic.go:334] "Generic (PLEG): container finished" podID="ecde2529-e479-46f5-9353-933279300d29" containerID="7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a" exitCode=0 Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.772528 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvqdq" event={"ID":"ecde2529-e479-46f5-9353-933279300d29","Type":"ContainerDied","Data":"7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a"} Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.772550 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvqdq" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.772574 4890 scope.go:117] "RemoveContainer" containerID="7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.772560 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvqdq" event={"ID":"ecde2529-e479-46f5-9353-933279300d29","Type":"ContainerDied","Data":"a395a908cf4191f128afa70bd56800b8b1c3f1fc31659f98b196e35af1466a1b"} Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.793683 4890 scope.go:117] "RemoveContainer" containerID="d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.805787 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvqdq"] Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.810504 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvqdq"] Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.817290 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfnln\" (UniqueName: \"kubernetes.io/projected/ecde2529-e479-46f5-9353-933279300d29-kube-api-access-qfnln\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.817332 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.817346 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecde2529-e479-46f5-9353-933279300d29-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.830666 4890 scope.go:117] 
"RemoveContainer" containerID="ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.844925 4890 scope.go:117] "RemoveContainer" containerID="7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a" Jan 21 16:08:51 crc kubenswrapper[4890]: E0121 16:08:51.845452 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a\": container with ID starting with 7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a not found: ID does not exist" containerID="7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.845495 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a"} err="failed to get container status \"7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a\": rpc error: code = NotFound desc = could not find container \"7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a\": container with ID starting with 7234f9f76caadad662ab37714091427e463812eb9d057cb44360d46c3403ab6a not found: ID does not exist" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.845523 4890 scope.go:117] "RemoveContainer" containerID="d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1" Jan 21 16:08:51 crc kubenswrapper[4890]: E0121 16:08:51.845936 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1\": container with ID starting with d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1 not found: ID does not exist" containerID="d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1" Jan 21 16:08:51 crc 
kubenswrapper[4890]: I0121 16:08:51.845974 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1"} err="failed to get container status \"d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1\": rpc error: code = NotFound desc = could not find container \"d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1\": container with ID starting with d38bd5a6768044f803b1c71a2868bf4eca4db82234885b41a60c36549508a6a1 not found: ID does not exist" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.846002 4890 scope.go:117] "RemoveContainer" containerID="ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e" Jan 21 16:08:51 crc kubenswrapper[4890]: E0121 16:08:51.846306 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e\": container with ID starting with ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e not found: ID does not exist" containerID="ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.846459 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e"} err="failed to get container status \"ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e\": rpc error: code = NotFound desc = could not find container \"ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e\": container with ID starting with ff51ba8833eef7d06dacf936fecefe77084ecce25585191b111dc645bb0bd37e not found: ID does not exist" Jan 21 16:08:51 crc kubenswrapper[4890]: I0121 16:08:51.923484 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecde2529-e479-46f5-9353-933279300d29" 
path="/var/lib/kubelet/pods/ecde2529-e479-46f5-9353-933279300d29/volumes" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.231321 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gmr72"] Jan 21 16:09:00 crc kubenswrapper[4890]: E0121 16:09:00.232101 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecde2529-e479-46f5-9353-933279300d29" containerName="registry-server" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.232113 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecde2529-e479-46f5-9353-933279300d29" containerName="registry-server" Jan 21 16:09:00 crc kubenswrapper[4890]: E0121 16:09:00.232135 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecde2529-e479-46f5-9353-933279300d29" containerName="extract-utilities" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.232141 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecde2529-e479-46f5-9353-933279300d29" containerName="extract-utilities" Jan 21 16:09:00 crc kubenswrapper[4890]: E0121 16:09:00.232159 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecde2529-e479-46f5-9353-933279300d29" containerName="extract-content" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.232165 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecde2529-e479-46f5-9353-933279300d29" containerName="extract-content" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.232291 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecde2529-e479-46f5-9353-933279300d29" containerName="registry-server" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.233456 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.245998 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gmr72"] Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.332818 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-catalog-content\") pod \"certified-operators-gmr72\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.333228 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-utilities\") pod \"certified-operators-gmr72\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.333425 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfgl\" (UniqueName: \"kubernetes.io/projected/296fbd9b-c3bd-479a-b617-81aff2eecbce-kube-api-access-wsfgl\") pod \"certified-operators-gmr72\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.434559 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsfgl\" (UniqueName: \"kubernetes.io/projected/296fbd9b-c3bd-479a-b617-81aff2eecbce-kube-api-access-wsfgl\") pod \"certified-operators-gmr72\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.434636 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-catalog-content\") pod \"certified-operators-gmr72\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.434666 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-utilities\") pod \"certified-operators-gmr72\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.435177 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-utilities\") pod \"certified-operators-gmr72\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.435457 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-catalog-content\") pod \"certified-operators-gmr72\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.463779 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsfgl\" (UniqueName: \"kubernetes.io/projected/296fbd9b-c3bd-479a-b617-81aff2eecbce-kube-api-access-wsfgl\") pod \"certified-operators-gmr72\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:00 crc kubenswrapper[4890]: I0121 16:09:00.549878 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:01 crc kubenswrapper[4890]: I0121 16:09:01.012282 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gmr72"] Jan 21 16:09:01 crc kubenswrapper[4890]: I0121 16:09:01.850757 4890 generic.go:334] "Generic (PLEG): container finished" podID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerID="18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a" exitCode=0 Jan 21 16:09:01 crc kubenswrapper[4890]: I0121 16:09:01.850799 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmr72" event={"ID":"296fbd9b-c3bd-479a-b617-81aff2eecbce","Type":"ContainerDied","Data":"18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a"} Jan 21 16:09:01 crc kubenswrapper[4890]: I0121 16:09:01.850987 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmr72" event={"ID":"296fbd9b-c3bd-479a-b617-81aff2eecbce","Type":"ContainerStarted","Data":"199779e3034c253ddda403725276fa7de5233e474b556946619dced9aae3650d"} Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.420211 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hrcvt"] Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.422283 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.435911 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrcvt"] Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.495215 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-catalog-content\") pod \"community-operators-hrcvt\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.495297 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6zbq\" (UniqueName: \"kubernetes.io/projected/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-kube-api-access-x6zbq\") pod \"community-operators-hrcvt\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.495319 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-utilities\") pod \"community-operators-hrcvt\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.597097 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-catalog-content\") pod \"community-operators-hrcvt\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.597235 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x6zbq\" (UniqueName: \"kubernetes.io/projected/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-kube-api-access-x6zbq\") pod \"community-operators-hrcvt\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.597263 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-utilities\") pod \"community-operators-hrcvt\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.597748 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-catalog-content\") pod \"community-operators-hrcvt\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.597885 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-utilities\") pod \"community-operators-hrcvt\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.617861 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6zbq\" (UniqueName: \"kubernetes.io/projected/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-kube-api-access-x6zbq\") pod \"community-operators-hrcvt\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:04 crc kubenswrapper[4890]: I0121 16:09:04.755649 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:05 crc kubenswrapper[4890]: W0121 16:09:05.439756 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e6738a4_a64c_41d6_ad44_e2f6aaa1cb55.slice/crio-97b31e9dcb9b06b67e9e85aa7edbb8119c5313e843addd28342fe306b1a86009 WatchSource:0}: Error finding container 97b31e9dcb9b06b67e9e85aa7edbb8119c5313e843addd28342fe306b1a86009: Status 404 returned error can't find the container with id 97b31e9dcb9b06b67e9e85aa7edbb8119c5313e843addd28342fe306b1a86009 Jan 21 16:09:05 crc kubenswrapper[4890]: I0121 16:09:05.460336 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrcvt"] Jan 21 16:09:05 crc kubenswrapper[4890]: I0121 16:09:05.883138 4890 generic.go:334] "Generic (PLEG): container finished" podID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerID="d9f5e0f0f3f27ecf78efeddfe43887f9cd177261f343a2e2770dfff007e974fb" exitCode=0 Jan 21 16:09:05 crc kubenswrapper[4890]: I0121 16:09:05.883207 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcvt" event={"ID":"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55","Type":"ContainerDied","Data":"d9f5e0f0f3f27ecf78efeddfe43887f9cd177261f343a2e2770dfff007e974fb"} Jan 21 16:09:05 crc kubenswrapper[4890]: I0121 16:09:05.883236 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcvt" event={"ID":"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55","Type":"ContainerStarted","Data":"97b31e9dcb9b06b67e9e85aa7edbb8119c5313e843addd28342fe306b1a86009"} Jan 21 16:09:05 crc kubenswrapper[4890]: I0121 16:09:05.885192 4890 generic.go:334] "Generic (PLEG): container finished" podID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerID="4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96" exitCode=0 Jan 21 16:09:05 crc kubenswrapper[4890]: I0121 
16:09:05.885243 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmr72" event={"ID":"296fbd9b-c3bd-479a-b617-81aff2eecbce","Type":"ContainerDied","Data":"4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96"} Jan 21 16:09:06 crc kubenswrapper[4890]: I0121 16:09:06.897959 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmr72" event={"ID":"296fbd9b-c3bd-479a-b617-81aff2eecbce","Type":"ContainerStarted","Data":"3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6"} Jan 21 16:09:06 crc kubenswrapper[4890]: I0121 16:09:06.901646 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcvt" event={"ID":"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55","Type":"ContainerStarted","Data":"cfc3eec1f62e84c4235b15404bcb2ac3a38ed26480fd8ac08ddeafe66e006611"} Jan 21 16:09:06 crc kubenswrapper[4890]: I0121 16:09:06.920745 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gmr72" podStartSLOduration=2.2655728379999998 podStartE2EDuration="6.920729123s" podCreationTimestamp="2026-01-21 16:09:00 +0000 UTC" firstStartedPulling="2026-01-21 16:09:01.853018756 +0000 UTC m=+2224.214461165" lastFinishedPulling="2026-01-21 16:09:06.508175041 +0000 UTC m=+2228.869617450" observedRunningTime="2026-01-21 16:09:06.914742164 +0000 UTC m=+2229.276184563" watchObservedRunningTime="2026-01-21 16:09:06.920729123 +0000 UTC m=+2229.282171532" Jan 21 16:09:07 crc kubenswrapper[4890]: I0121 16:09:07.910491 4890 generic.go:334] "Generic (PLEG): container finished" podID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerID="cfc3eec1f62e84c4235b15404bcb2ac3a38ed26480fd8ac08ddeafe66e006611" exitCode=0 Jan 21 16:09:07 crc kubenswrapper[4890]: I0121 16:09:07.910544 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcvt" 
event={"ID":"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55","Type":"ContainerDied","Data":"cfc3eec1f62e84c4235b15404bcb2ac3a38ed26480fd8ac08ddeafe66e006611"} Jan 21 16:09:09 crc kubenswrapper[4890]: I0121 16:09:09.929834 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcvt" event={"ID":"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55","Type":"ContainerStarted","Data":"bde84a38df8cde213858100906b7eb4390955a612fe12870160a7f6158378fd8"} Jan 21 16:09:09 crc kubenswrapper[4890]: I0121 16:09:09.950943 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hrcvt" podStartSLOduration=2.852505834 podStartE2EDuration="5.950927929s" podCreationTimestamp="2026-01-21 16:09:04 +0000 UTC" firstStartedPulling="2026-01-21 16:09:05.88439823 +0000 UTC m=+2228.245840639" lastFinishedPulling="2026-01-21 16:09:08.982820325 +0000 UTC m=+2231.344262734" observedRunningTime="2026-01-21 16:09:09.949316029 +0000 UTC m=+2232.310758438" watchObservedRunningTime="2026-01-21 16:09:09.950927929 +0000 UTC m=+2232.312370338" Jan 21 16:09:10 crc kubenswrapper[4890]: I0121 16:09:10.550329 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:10 crc kubenswrapper[4890]: I0121 16:09:10.550395 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:10 crc kubenswrapper[4890]: I0121 16:09:10.597573 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:14 crc kubenswrapper[4890]: I0121 16:09:14.756806 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:14 crc kubenswrapper[4890]: I0121 16:09:14.757174 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:14 crc kubenswrapper[4890]: I0121 16:09:14.800546 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:14 crc kubenswrapper[4890]: I0121 16:09:14.998739 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:15 crc kubenswrapper[4890]: I0121 16:09:15.042519 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrcvt"] Jan 21 16:09:16 crc kubenswrapper[4890]: I0121 16:09:16.973656 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hrcvt" podUID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerName="registry-server" containerID="cri-o://bde84a38df8cde213858100906b7eb4390955a612fe12870160a7f6158378fd8" gracePeriod=2 Jan 21 16:09:18 crc kubenswrapper[4890]: I0121 16:09:18.991326 4890 generic.go:334] "Generic (PLEG): container finished" podID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerID="bde84a38df8cde213858100906b7eb4390955a612fe12870160a7f6158378fd8" exitCode=0 Jan 21 16:09:18 crc kubenswrapper[4890]: I0121 16:09:18.991733 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcvt" event={"ID":"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55","Type":"ContainerDied","Data":"bde84a38df8cde213858100906b7eb4390955a612fe12870160a7f6158378fd8"} Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.155831 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.307880 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-utilities\") pod \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.308080 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6zbq\" (UniqueName: \"kubernetes.io/projected/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-kube-api-access-x6zbq\") pod \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.308151 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-catalog-content\") pod \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\" (UID: \"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55\") " Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.308958 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-utilities" (OuterVolumeSpecName: "utilities") pod "8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" (UID: "8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.313497 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-kube-api-access-x6zbq" (OuterVolumeSpecName: "kube-api-access-x6zbq") pod "8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" (UID: "8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55"). InnerVolumeSpecName "kube-api-access-x6zbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.360137 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" (UID: "8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.410145 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6zbq\" (UniqueName: \"kubernetes.io/projected/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-kube-api-access-x6zbq\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.410190 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4890]: I0121 16:09:19.410199 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:20 crc kubenswrapper[4890]: I0121 16:09:20.000385 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcvt" event={"ID":"8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55","Type":"ContainerDied","Data":"97b31e9dcb9b06b67e9e85aa7edbb8119c5313e843addd28342fe306b1a86009"} Jan 21 16:09:20 crc kubenswrapper[4890]: I0121 16:09:20.000445 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrcvt" Jan 21 16:09:20 crc kubenswrapper[4890]: I0121 16:09:20.000765 4890 scope.go:117] "RemoveContainer" containerID="bde84a38df8cde213858100906b7eb4390955a612fe12870160a7f6158378fd8" Jan 21 16:09:20 crc kubenswrapper[4890]: I0121 16:09:20.024555 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrcvt"] Jan 21 16:09:20 crc kubenswrapper[4890]: I0121 16:09:20.031983 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hrcvt"] Jan 21 16:09:20 crc kubenswrapper[4890]: I0121 16:09:20.033102 4890 scope.go:117] "RemoveContainer" containerID="cfc3eec1f62e84c4235b15404bcb2ac3a38ed26480fd8ac08ddeafe66e006611" Jan 21 16:09:20 crc kubenswrapper[4890]: I0121 16:09:20.056562 4890 scope.go:117] "RemoveContainer" containerID="d9f5e0f0f3f27ecf78efeddfe43887f9cd177261f343a2e2770dfff007e974fb" Jan 21 16:09:20 crc kubenswrapper[4890]: I0121 16:09:20.590233 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.384270 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gmr72"] Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.384510 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gmr72" podUID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerName="registry-server" containerID="cri-o://3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6" gracePeriod=2 Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.783147 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.865635 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsfgl\" (UniqueName: \"kubernetes.io/projected/296fbd9b-c3bd-479a-b617-81aff2eecbce-kube-api-access-wsfgl\") pod \"296fbd9b-c3bd-479a-b617-81aff2eecbce\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.865872 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-catalog-content\") pod \"296fbd9b-c3bd-479a-b617-81aff2eecbce\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.865914 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-utilities\") pod \"296fbd9b-c3bd-479a-b617-81aff2eecbce\" (UID: \"296fbd9b-c3bd-479a-b617-81aff2eecbce\") " Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.867742 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-utilities" (OuterVolumeSpecName: "utilities") pod "296fbd9b-c3bd-479a-b617-81aff2eecbce" (UID: "296fbd9b-c3bd-479a-b617-81aff2eecbce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.872517 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296fbd9b-c3bd-479a-b617-81aff2eecbce-kube-api-access-wsfgl" (OuterVolumeSpecName: "kube-api-access-wsfgl") pod "296fbd9b-c3bd-479a-b617-81aff2eecbce" (UID: "296fbd9b-c3bd-479a-b617-81aff2eecbce"). InnerVolumeSpecName "kube-api-access-wsfgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.918309 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "296fbd9b-c3bd-479a-b617-81aff2eecbce" (UID: "296fbd9b-c3bd-479a-b617-81aff2eecbce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.924181 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" path="/var/lib/kubelet/pods/8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55/volumes" Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.968264 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsfgl\" (UniqueName: \"kubernetes.io/projected/296fbd9b-c3bd-479a-b617-81aff2eecbce-kube-api-access-wsfgl\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.968313 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:21 crc kubenswrapper[4890]: I0121 16:09:21.968328 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fbd9b-c3bd-479a-b617-81aff2eecbce-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.016519 4890 generic.go:334] "Generic (PLEG): container finished" podID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerID="3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6" exitCode=0 Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.016574 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmr72" 
event={"ID":"296fbd9b-c3bd-479a-b617-81aff2eecbce","Type":"ContainerDied","Data":"3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6"} Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.016612 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmr72" event={"ID":"296fbd9b-c3bd-479a-b617-81aff2eecbce","Type":"ContainerDied","Data":"199779e3034c253ddda403725276fa7de5233e474b556946619dced9aae3650d"} Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.016634 4890 scope.go:117] "RemoveContainer" containerID="3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.016887 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gmr72" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.048123 4890 scope.go:117] "RemoveContainer" containerID="4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.050734 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gmr72"] Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.056873 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gmr72"] Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.070638 4890 scope.go:117] "RemoveContainer" containerID="18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.096082 4890 scope.go:117] "RemoveContainer" containerID="3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6" Jan 21 16:09:22 crc kubenswrapper[4890]: E0121 16:09:22.096710 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6\": container 
with ID starting with 3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6 not found: ID does not exist" containerID="3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.096747 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6"} err="failed to get container status \"3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6\": rpc error: code = NotFound desc = could not find container \"3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6\": container with ID starting with 3c87b22c1bb3f3040a95bcff82c27e2c638d27f0c3d0e48a01825be7c96debd6 not found: ID does not exist" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.096776 4890 scope.go:117] "RemoveContainer" containerID="4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96" Jan 21 16:09:22 crc kubenswrapper[4890]: E0121 16:09:22.097183 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96\": container with ID starting with 4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96 not found: ID does not exist" containerID="4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.097214 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96"} err="failed to get container status \"4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96\": rpc error: code = NotFound desc = could not find container \"4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96\": container with ID starting with 4cbf94479cca820c41a4804eb207c993beb996e9d4594d5142b39c112329ef96 not 
found: ID does not exist" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.097235 4890 scope.go:117] "RemoveContainer" containerID="18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a" Jan 21 16:09:22 crc kubenswrapper[4890]: E0121 16:09:22.097711 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a\": container with ID starting with 18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a not found: ID does not exist" containerID="18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a" Jan 21 16:09:22 crc kubenswrapper[4890]: I0121 16:09:22.097824 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a"} err="failed to get container status \"18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a\": rpc error: code = NotFound desc = could not find container \"18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a\": container with ID starting with 18e55b2dafb4b233696f90519ae7afc344dfb63d88e023c5906132a7c671c83a not found: ID does not exist" Jan 21 16:09:23 crc kubenswrapper[4890]: I0121 16:09:23.923402 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296fbd9b-c3bd-479a-b617-81aff2eecbce" path="/var/lib/kubelet/pods/296fbd9b-c3bd-479a-b617-81aff2eecbce/volumes" Jan 21 16:10:48 crc kubenswrapper[4890]: I0121 16:10:48.762016 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:10:48 crc kubenswrapper[4890]: I0121 16:10:48.762592 4890 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:11:18 crc kubenswrapper[4890]: I0121 16:11:18.762647 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:11:18 crc kubenswrapper[4890]: I0121 16:11:18.763194 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:11:48 crc kubenswrapper[4890]: I0121 16:11:48.762551 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:11:48 crc kubenswrapper[4890]: I0121 16:11:48.763120 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:11:48 crc kubenswrapper[4890]: I0121 16:11:48.763174 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:11:48 crc 
kubenswrapper[4890]: I0121 16:11:48.763863 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:11:48 crc kubenswrapper[4890]: I0121 16:11:48.763918 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" gracePeriod=600 Jan 21 16:11:48 crc kubenswrapper[4890]: E0121 16:11:48.891115 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:11:48 crc kubenswrapper[4890]: I0121 16:11:48.968188 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" exitCode=0 Jan 21 16:11:48 crc kubenswrapper[4890]: I0121 16:11:48.968236 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063"} Jan 21 16:11:48 crc kubenswrapper[4890]: I0121 16:11:48.968277 4890 scope.go:117] "RemoveContainer" 
containerID="712ab443c28dddd67de6bfe92e08fe5c10c84c70c53493bc8ab47fee197a105e" Jan 21 16:11:48 crc kubenswrapper[4890]: I0121 16:11:48.968832 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:11:48 crc kubenswrapper[4890]: E0121 16:11:48.969169 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:11:59 crc kubenswrapper[4890]: I0121 16:11:59.913816 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:11:59 crc kubenswrapper[4890]: E0121 16:11:59.914755 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:12:10 crc kubenswrapper[4890]: I0121 16:12:10.915033 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:12:10 crc kubenswrapper[4890]: E0121 16:12:10.915883 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:12:24 crc kubenswrapper[4890]: I0121 16:12:24.914623 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:12:24 crc kubenswrapper[4890]: E0121 16:12:24.915396 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:12:36 crc kubenswrapper[4890]: I0121 16:12:36.913585 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:12:36 crc kubenswrapper[4890]: E0121 16:12:36.914255 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:12:47 crc kubenswrapper[4890]: I0121 16:12:47.927677 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:12:47 crc kubenswrapper[4890]: E0121 16:12:47.928300 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:13:01 crc kubenswrapper[4890]: I0121 16:13:01.914757 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:13:01 crc kubenswrapper[4890]: E0121 16:13:01.915569 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:13:15 crc kubenswrapper[4890]: I0121 16:13:15.914750 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:13:15 crc kubenswrapper[4890]: E0121 16:13:15.915533 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:13:29 crc kubenswrapper[4890]: I0121 16:13:29.914501 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:13:29 crc kubenswrapper[4890]: E0121 16:13:29.916517 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:13:43 crc kubenswrapper[4890]: I0121 16:13:43.914222 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:13:43 crc kubenswrapper[4890]: E0121 16:13:43.915108 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:13:55 crc kubenswrapper[4890]: I0121 16:13:55.914791 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:13:55 crc kubenswrapper[4890]: E0121 16:13:55.915593 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:14:07 crc kubenswrapper[4890]: I0121 16:14:07.917890 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:14:07 crc kubenswrapper[4890]: E0121 16:14:07.918670 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:14:19 crc kubenswrapper[4890]: I0121 16:14:19.913780 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:14:19 crc kubenswrapper[4890]: E0121 16:14:19.915121 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:14:33 crc kubenswrapper[4890]: I0121 16:14:33.914107 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:14:33 crc kubenswrapper[4890]: E0121 16:14:33.915074 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:14:48 crc kubenswrapper[4890]: I0121 16:14:48.914619 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:14:48 crc kubenswrapper[4890]: E0121 16:14:48.917125 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.143322 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs"] Jan 21 16:15:00 crc kubenswrapper[4890]: E0121 16:15:00.144065 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.144080 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4890]: E0121 16:15:00.144097 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.144105 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4890]: E0121 16:15:00.144118 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.144124 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4890]: E0121 16:15:00.144131 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.144137 4890 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4890]: E0121 16:15:00.144146 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.144152 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4890]: E0121 16:15:00.144173 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.144183 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.144332 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="296fbd9b-c3bd-479a-b617-81aff2eecbce" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.144348 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6738a4-a64c-41d6-ad44-e2f6aaa1cb55" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.145022 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.148104 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.150022 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.168325 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs"] Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.243995 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-267gn\" (UniqueName: \"kubernetes.io/projected/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-kube-api-access-267gn\") pod \"collect-profiles-29483535-nkmgs\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.244196 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-secret-volume\") pod \"collect-profiles-29483535-nkmgs\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.244284 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-config-volume\") pod \"collect-profiles-29483535-nkmgs\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.345223 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-secret-volume\") pod \"collect-profiles-29483535-nkmgs\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.345317 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-config-volume\") pod \"collect-profiles-29483535-nkmgs\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.345385 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-267gn\" (UniqueName: \"kubernetes.io/projected/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-kube-api-access-267gn\") pod \"collect-profiles-29483535-nkmgs\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.346570 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-config-volume\") pod \"collect-profiles-29483535-nkmgs\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.353472 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-secret-volume\") pod \"collect-profiles-29483535-nkmgs\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.364686 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-267gn\" (UniqueName: \"kubernetes.io/projected/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-kube-api-access-267gn\") pod \"collect-profiles-29483535-nkmgs\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.467046 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:00 crc kubenswrapper[4890]: I0121 16:15:00.730279 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs"] Jan 21 16:15:01 crc kubenswrapper[4890]: I0121 16:15:01.255479 4890 generic.go:334] "Generic (PLEG): container finished" podID="5b5f18fc-a813-4c74-ade7-0d3ef6c53d87" containerID="67d0b258eab9d1234c7dcf0002c8c6d782d50270341f6881b73a53739639c90d" exitCode=0 Jan 21 16:15:01 crc kubenswrapper[4890]: I0121 16:15:01.255646 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" event={"ID":"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87","Type":"ContainerDied","Data":"67d0b258eab9d1234c7dcf0002c8c6d782d50270341f6881b73a53739639c90d"} Jan 21 16:15:01 crc kubenswrapper[4890]: I0121 16:15:01.255833 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" 
event={"ID":"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87","Type":"ContainerStarted","Data":"442d182eb6107be81d24ea1716c037780300e0fccf2cc4faada3bda069606826"} Jan 21 16:15:01 crc kubenswrapper[4890]: I0121 16:15:01.914662 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:15:01 crc kubenswrapper[4890]: E0121 16:15:01.914868 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.565719 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.693489 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-secret-volume\") pod \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.693579 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-config-volume\") pod \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.693611 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-267gn\" (UniqueName: 
\"kubernetes.io/projected/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-kube-api-access-267gn\") pod \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\" (UID: \"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87\") " Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.694917 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b5f18fc-a813-4c74-ade7-0d3ef6c53d87" (UID: "5b5f18fc-a813-4c74-ade7-0d3ef6c53d87"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.699911 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b5f18fc-a813-4c74-ade7-0d3ef6c53d87" (UID: "5b5f18fc-a813-4c74-ade7-0d3ef6c53d87"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.700134 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-kube-api-access-267gn" (OuterVolumeSpecName: "kube-api-access-267gn") pod "5b5f18fc-a813-4c74-ade7-0d3ef6c53d87" (UID: "5b5f18fc-a813-4c74-ade7-0d3ef6c53d87"). InnerVolumeSpecName "kube-api-access-267gn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.796015 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.796074 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-267gn\" (UniqueName: \"kubernetes.io/projected/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-kube-api-access-267gn\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:02 crc kubenswrapper[4890]: I0121 16:15:02.796090 4890 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:03 crc kubenswrapper[4890]: I0121 16:15:03.270335 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" event={"ID":"5b5f18fc-a813-4c74-ade7-0d3ef6c53d87","Type":"ContainerDied","Data":"442d182eb6107be81d24ea1716c037780300e0fccf2cc4faada3bda069606826"} Jan 21 16:15:03 crc kubenswrapper[4890]: I0121 16:15:03.270382 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="442d182eb6107be81d24ea1716c037780300e0fccf2cc4faada3bda069606826" Jan 21 16:15:03 crc kubenswrapper[4890]: I0121 16:15:03.270434 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs" Jan 21 16:15:03 crc kubenswrapper[4890]: I0121 16:15:03.635100 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn"] Jan 21 16:15:03 crc kubenswrapper[4890]: I0121 16:15:03.640935 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-vsckn"] Jan 21 16:15:03 crc kubenswrapper[4890]: I0121 16:15:03.929888 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5373aa1-b2ba-47c7-bbdb-1835b9758c77" path="/var/lib/kubelet/pods/d5373aa1-b2ba-47c7-bbdb-1835b9758c77/volumes" Jan 21 16:15:12 crc kubenswrapper[4890]: I0121 16:15:12.913784 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:15:12 crc kubenswrapper[4890]: E0121 16:15:12.914219 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:15:17 crc kubenswrapper[4890]: I0121 16:15:17.469324 4890 scope.go:117] "RemoveContainer" containerID="6c69e4fa8334a52053bc0b1cb73fcd66f218305c00dbea9f5738a65d4347ffb6" Jan 21 16:15:23 crc kubenswrapper[4890]: I0121 16:15:23.914908 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:15:23 crc kubenswrapper[4890]: E0121 16:15:23.915727 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:15:38 crc kubenswrapper[4890]: I0121 16:15:38.914260 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:15:38 crc kubenswrapper[4890]: E0121 16:15:38.914987 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:15:51 crc kubenswrapper[4890]: I0121 16:15:51.914602 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:15:51 crc kubenswrapper[4890]: E0121 16:15:51.915335 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:16:04 crc kubenswrapper[4890]: I0121 16:16:04.914544 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:16:04 crc kubenswrapper[4890]: E0121 16:16:04.915374 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:16:15 crc kubenswrapper[4890]: I0121 16:16:15.914779 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:16:15 crc kubenswrapper[4890]: E0121 16:16:15.915867 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:16:26 crc kubenswrapper[4890]: I0121 16:16:26.914039 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:16:26 crc kubenswrapper[4890]: E0121 16:16:26.914839 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:16:40 crc kubenswrapper[4890]: I0121 16:16:40.914767 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:16:40 crc kubenswrapper[4890]: E0121 16:16:40.915622 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:16:54 crc kubenswrapper[4890]: I0121 16:16:54.914744 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:16:55 crc kubenswrapper[4890]: I0121 16:16:55.669982 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"b6d5db392ccdd54eeb723f53cc6eb20c65891ec1b82a753f16ee04a3feb2e734"} Jan 21 16:18:11 crc kubenswrapper[4890]: I0121 16:18:11.883781 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b86rs"] Jan 21 16:18:11 crc kubenswrapper[4890]: E0121 16:18:11.884637 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5f18fc-a813-4c74-ade7-0d3ef6c53d87" containerName="collect-profiles" Jan 21 16:18:11 crc kubenswrapper[4890]: I0121 16:18:11.884656 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5f18fc-a813-4c74-ade7-0d3ef6c53d87" containerName="collect-profiles" Jan 21 16:18:11 crc kubenswrapper[4890]: I0121 16:18:11.884842 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5f18fc-a813-4c74-ade7-0d3ef6c53d87" containerName="collect-profiles" Jan 21 16:18:11 crc kubenswrapper[4890]: I0121 16:18:11.885911 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:11 crc kubenswrapper[4890]: I0121 16:18:11.905561 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b86rs"] Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.000647 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-catalog-content\") pod \"redhat-operators-b86rs\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.000716 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrtfq\" (UniqueName: \"kubernetes.io/projected/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-kube-api-access-xrtfq\") pod \"redhat-operators-b86rs\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.000742 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-utilities\") pod \"redhat-operators-b86rs\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.102362 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-catalog-content\") pod \"redhat-operators-b86rs\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.102419 4890 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-xrtfq\" (UniqueName: \"kubernetes.io/projected/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-kube-api-access-xrtfq\") pod \"redhat-operators-b86rs\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.102445 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-utilities\") pod \"redhat-operators-b86rs\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.102892 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-utilities\") pod \"redhat-operators-b86rs\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.102930 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-catalog-content\") pod \"redhat-operators-b86rs\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.126343 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrtfq\" (UniqueName: \"kubernetes.io/projected/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-kube-api-access-xrtfq\") pod \"redhat-operators-b86rs\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.206217 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:12 crc kubenswrapper[4890]: I0121 16:18:12.637894 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b86rs"] Jan 21 16:18:13 crc kubenswrapper[4890]: I0121 16:18:13.187430 4890 generic.go:334] "Generic (PLEG): container finished" podID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerID="fd7fc98eaa94ed21cbd7e51a9e4de04276b4599b8411d0a58a7ec29d67fbe776" exitCode=0 Jan 21 16:18:13 crc kubenswrapper[4890]: I0121 16:18:13.187542 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b86rs" event={"ID":"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8","Type":"ContainerDied","Data":"fd7fc98eaa94ed21cbd7e51a9e4de04276b4599b8411d0a58a7ec29d67fbe776"} Jan 21 16:18:13 crc kubenswrapper[4890]: I0121 16:18:13.187755 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b86rs" event={"ID":"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8","Type":"ContainerStarted","Data":"38a015d86e2e222f5775dd1a699e1229923af4d4649391a1ea6a11d92598ad45"} Jan 21 16:18:13 crc kubenswrapper[4890]: I0121 16:18:13.189400 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:18:15 crc kubenswrapper[4890]: I0121 16:18:15.201901 4890 generic.go:334] "Generic (PLEG): container finished" podID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerID="8d30c0dda78d9a2a7bdd94d53ad8914b6dc9f19fbb3324fabdf64adb923286ec" exitCode=0 Jan 21 16:18:15 crc kubenswrapper[4890]: I0121 16:18:15.201951 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b86rs" event={"ID":"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8","Type":"ContainerDied","Data":"8d30c0dda78d9a2a7bdd94d53ad8914b6dc9f19fbb3324fabdf64adb923286ec"} Jan 21 16:18:16 crc kubenswrapper[4890]: I0121 16:18:16.209953 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-b86rs" event={"ID":"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8","Type":"ContainerStarted","Data":"095da4dda30287edb1b2522d5f17810ec6ad9063e812f053b58229e9a7e00163"} Jan 21 16:18:16 crc kubenswrapper[4890]: I0121 16:18:16.231480 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b86rs" podStartSLOduration=2.552386871 podStartE2EDuration="5.231463848s" podCreationTimestamp="2026-01-21 16:18:11 +0000 UTC" firstStartedPulling="2026-01-21 16:18:13.189080186 +0000 UTC m=+2775.550522595" lastFinishedPulling="2026-01-21 16:18:15.868157163 +0000 UTC m=+2778.229599572" observedRunningTime="2026-01-21 16:18:16.226275859 +0000 UTC m=+2778.587718278" watchObservedRunningTime="2026-01-21 16:18:16.231463848 +0000 UTC m=+2778.592906257" Jan 21 16:18:22 crc kubenswrapper[4890]: I0121 16:18:22.207284 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:22 crc kubenswrapper[4890]: I0121 16:18:22.208518 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:22 crc kubenswrapper[4890]: I0121 16:18:22.248160 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:22 crc kubenswrapper[4890]: I0121 16:18:22.302699 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:22 crc kubenswrapper[4890]: I0121 16:18:22.482169 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b86rs"] Jan 21 16:18:24 crc kubenswrapper[4890]: I0121 16:18:24.267870 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b86rs" podUID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" 
containerName="registry-server" containerID="cri-o://095da4dda30287edb1b2522d5f17810ec6ad9063e812f053b58229e9a7e00163" gracePeriod=2 Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.297202 4890 generic.go:334] "Generic (PLEG): container finished" podID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerID="095da4dda30287edb1b2522d5f17810ec6ad9063e812f053b58229e9a7e00163" exitCode=0 Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.297751 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b86rs" event={"ID":"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8","Type":"ContainerDied","Data":"095da4dda30287edb1b2522d5f17810ec6ad9063e812f053b58229e9a7e00163"} Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.399124 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.548588 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-utilities\") pod \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.548664 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrtfq\" (UniqueName: \"kubernetes.io/projected/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-kube-api-access-xrtfq\") pod \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.548769 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-catalog-content\") pod \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\" (UID: \"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8\") " Jan 21 16:18:28 crc 
kubenswrapper[4890]: I0121 16:18:28.552508 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-utilities" (OuterVolumeSpecName: "utilities") pod "8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" (UID: "8e52eb49-3a17-4518-b9c3-6f3c7787ddb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.559539 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-kube-api-access-xrtfq" (OuterVolumeSpecName: "kube-api-access-xrtfq") pod "8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" (UID: "8e52eb49-3a17-4518-b9c3-6f3c7787ddb8"). InnerVolumeSpecName "kube-api-access-xrtfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.649959 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.649998 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrtfq\" (UniqueName: \"kubernetes.io/projected/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-kube-api-access-xrtfq\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.666179 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" (UID: "8e52eb49-3a17-4518-b9c3-6f3c7787ddb8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:18:28 crc kubenswrapper[4890]: I0121 16:18:28.751104 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:29 crc kubenswrapper[4890]: I0121 16:18:29.307363 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b86rs" event={"ID":"8e52eb49-3a17-4518-b9c3-6f3c7787ddb8","Type":"ContainerDied","Data":"38a015d86e2e222f5775dd1a699e1229923af4d4649391a1ea6a11d92598ad45"} Jan 21 16:18:29 crc kubenswrapper[4890]: I0121 16:18:29.307450 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b86rs" Jan 21 16:18:29 crc kubenswrapper[4890]: I0121 16:18:29.307685 4890 scope.go:117] "RemoveContainer" containerID="095da4dda30287edb1b2522d5f17810ec6ad9063e812f053b58229e9a7e00163" Jan 21 16:18:29 crc kubenswrapper[4890]: I0121 16:18:29.325401 4890 scope.go:117] "RemoveContainer" containerID="8d30c0dda78d9a2a7bdd94d53ad8914b6dc9f19fbb3324fabdf64adb923286ec" Jan 21 16:18:29 crc kubenswrapper[4890]: I0121 16:18:29.339881 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b86rs"] Jan 21 16:18:29 crc kubenswrapper[4890]: I0121 16:18:29.347634 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b86rs"] Jan 21 16:18:29 crc kubenswrapper[4890]: I0121 16:18:29.366374 4890 scope.go:117] "RemoveContainer" containerID="fd7fc98eaa94ed21cbd7e51a9e4de04276b4599b8411d0a58a7ec29d67fbe776" Jan 21 16:18:29 crc kubenswrapper[4890]: I0121 16:18:29.929617 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" path="/var/lib/kubelet/pods/8e52eb49-3a17-4518-b9c3-6f3c7787ddb8/volumes" Jan 21 16:18:45 crc 
kubenswrapper[4890]: I0121 16:18:45.998698 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wcwvc"] Jan 21 16:18:46 crc kubenswrapper[4890]: E0121 16:18:45.999298 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerName="extract-content" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:45.999313 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerName="extract-content" Jan 21 16:18:46 crc kubenswrapper[4890]: E0121 16:18:45.999327 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerName="extract-utilities" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:45.999335 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerName="extract-utilities" Jan 21 16:18:46 crc kubenswrapper[4890]: E0121 16:18:45.999537 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerName="registry-server" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:45.999551 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerName="registry-server" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:45.999721 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52eb49-3a17-4518-b9c3-6f3c7787ddb8" containerName="registry-server" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.000818 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.012783 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcwvc"] Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.084663 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgn7v\" (UniqueName: \"kubernetes.io/projected/d17e076f-a4cc-4d33-8945-4215505005b1-kube-api-access-xgn7v\") pod \"redhat-marketplace-wcwvc\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.084735 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-utilities\") pod \"redhat-marketplace-wcwvc\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.084800 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-catalog-content\") pod \"redhat-marketplace-wcwvc\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.186362 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-catalog-content\") pod \"redhat-marketplace-wcwvc\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.186463 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xgn7v\" (UniqueName: \"kubernetes.io/projected/d17e076f-a4cc-4d33-8945-4215505005b1-kube-api-access-xgn7v\") pod \"redhat-marketplace-wcwvc\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.186509 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-utilities\") pod \"redhat-marketplace-wcwvc\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.186931 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-catalog-content\") pod \"redhat-marketplace-wcwvc\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.187017 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-utilities\") pod \"redhat-marketplace-wcwvc\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.207163 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgn7v\" (UniqueName: \"kubernetes.io/projected/d17e076f-a4cc-4d33-8945-4215505005b1-kube-api-access-xgn7v\") pod \"redhat-marketplace-wcwvc\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.330499 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:46 crc kubenswrapper[4890]: I0121 16:18:46.833560 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcwvc"] Jan 21 16:18:47 crc kubenswrapper[4890]: I0121 16:18:47.446064 4890 generic.go:334] "Generic (PLEG): container finished" podID="d17e076f-a4cc-4d33-8945-4215505005b1" containerID="8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8" exitCode=0 Jan 21 16:18:47 crc kubenswrapper[4890]: I0121 16:18:47.446138 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcwvc" event={"ID":"d17e076f-a4cc-4d33-8945-4215505005b1","Type":"ContainerDied","Data":"8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8"} Jan 21 16:18:47 crc kubenswrapper[4890]: I0121 16:18:47.446419 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcwvc" event={"ID":"d17e076f-a4cc-4d33-8945-4215505005b1","Type":"ContainerStarted","Data":"e34d412b39707d77c415546d8a508d21fb76552f81390fcf2bcc72b1e1e7f6e7"} Jan 21 16:18:48 crc kubenswrapper[4890]: I0121 16:18:48.455467 4890 generic.go:334] "Generic (PLEG): container finished" podID="d17e076f-a4cc-4d33-8945-4215505005b1" containerID="96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e" exitCode=0 Jan 21 16:18:48 crc kubenswrapper[4890]: I0121 16:18:48.455520 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcwvc" event={"ID":"d17e076f-a4cc-4d33-8945-4215505005b1","Type":"ContainerDied","Data":"96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e"} Jan 21 16:18:49 crc kubenswrapper[4890]: I0121 16:18:49.463709 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcwvc" 
event={"ID":"d17e076f-a4cc-4d33-8945-4215505005b1","Type":"ContainerStarted","Data":"2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f"} Jan 21 16:18:49 crc kubenswrapper[4890]: I0121 16:18:49.483632 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wcwvc" podStartSLOduration=3.1066982850000002 podStartE2EDuration="4.483614223s" podCreationTimestamp="2026-01-21 16:18:45 +0000 UTC" firstStartedPulling="2026-01-21 16:18:47.447842003 +0000 UTC m=+2809.809284412" lastFinishedPulling="2026-01-21 16:18:48.824757951 +0000 UTC m=+2811.186200350" observedRunningTime="2026-01-21 16:18:49.480232989 +0000 UTC m=+2811.841675398" watchObservedRunningTime="2026-01-21 16:18:49.483614223 +0000 UTC m=+2811.845056632" Jan 21 16:18:56 crc kubenswrapper[4890]: I0121 16:18:56.331450 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:56 crc kubenswrapper[4890]: I0121 16:18:56.332011 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:56 crc kubenswrapper[4890]: I0121 16:18:56.369557 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:56 crc kubenswrapper[4890]: I0121 16:18:56.557678 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:56 crc kubenswrapper[4890]: I0121 16:18:56.603514 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcwvc"] Jan 21 16:18:58 crc kubenswrapper[4890]: I0121 16:18:58.522563 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wcwvc" podUID="d17e076f-a4cc-4d33-8945-4215505005b1" containerName="registry-server" 
containerID="cri-o://2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f" gracePeriod=2 Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.398392 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.544484 4890 generic.go:334] "Generic (PLEG): container finished" podID="d17e076f-a4cc-4d33-8945-4215505005b1" containerID="2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f" exitCode=0 Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.544527 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcwvc" event={"ID":"d17e076f-a4cc-4d33-8945-4215505005b1","Type":"ContainerDied","Data":"2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f"} Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.544558 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcwvc" event={"ID":"d17e076f-a4cc-4d33-8945-4215505005b1","Type":"ContainerDied","Data":"e34d412b39707d77c415546d8a508d21fb76552f81390fcf2bcc72b1e1e7f6e7"} Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.544574 4890 scope.go:117] "RemoveContainer" containerID="2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.544608 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcwvc" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.547752 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-utilities\") pod \"d17e076f-a4cc-4d33-8945-4215505005b1\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.547902 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-catalog-content\") pod \"d17e076f-a4cc-4d33-8945-4215505005b1\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.547986 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgn7v\" (UniqueName: \"kubernetes.io/projected/d17e076f-a4cc-4d33-8945-4215505005b1-kube-api-access-xgn7v\") pod \"d17e076f-a4cc-4d33-8945-4215505005b1\" (UID: \"d17e076f-a4cc-4d33-8945-4215505005b1\") " Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.549114 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-utilities" (OuterVolumeSpecName: "utilities") pod "d17e076f-a4cc-4d33-8945-4215505005b1" (UID: "d17e076f-a4cc-4d33-8945-4215505005b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.552250 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17e076f-a4cc-4d33-8945-4215505005b1-kube-api-access-xgn7v" (OuterVolumeSpecName: "kube-api-access-xgn7v") pod "d17e076f-a4cc-4d33-8945-4215505005b1" (UID: "d17e076f-a4cc-4d33-8945-4215505005b1"). InnerVolumeSpecName "kube-api-access-xgn7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.569056 4890 scope.go:117] "RemoveContainer" containerID="96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.571871 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d17e076f-a4cc-4d33-8945-4215505005b1" (UID: "d17e076f-a4cc-4d33-8945-4215505005b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.585692 4890 scope.go:117] "RemoveContainer" containerID="8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.610975 4890 scope.go:117] "RemoveContainer" containerID="2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f" Jan 21 16:18:59 crc kubenswrapper[4890]: E0121 16:18:59.611485 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f\": container with ID starting with 2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f not found: ID does not exist" containerID="2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.611596 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f"} err="failed to get container status \"2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f\": rpc error: code = NotFound desc = could not find container \"2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f\": container with ID starting 
with 2dbced88933ef955bc54210c4685d035bb2867af30009445721027ca4be3ed4f not found: ID does not exist" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.611681 4890 scope.go:117] "RemoveContainer" containerID="96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e" Jan 21 16:18:59 crc kubenswrapper[4890]: E0121 16:18:59.612054 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e\": container with ID starting with 96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e not found: ID does not exist" containerID="96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.612148 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e"} err="failed to get container status \"96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e\": rpc error: code = NotFound desc = could not find container \"96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e\": container with ID starting with 96784e23c6e31c62f704856ac822bc7b5a29b76914400ce84e71848dc331a88e not found: ID does not exist" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.612214 4890 scope.go:117] "RemoveContainer" containerID="8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8" Jan 21 16:18:59 crc kubenswrapper[4890]: E0121 16:18:59.612575 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8\": container with ID starting with 8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8 not found: ID does not exist" containerID="8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8" Jan 21 16:18:59 
crc kubenswrapper[4890]: I0121 16:18:59.612622 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8"} err="failed to get container status \"8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8\": rpc error: code = NotFound desc = could not find container \"8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8\": container with ID starting with 8e7fd20c71bc2363b11bb3165d386107ebf21cf07b07d7c21e6fa89ec98e81e8 not found: ID does not exist" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.650121 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.650384 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17e076f-a4cc-4d33-8945-4215505005b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.650465 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgn7v\" (UniqueName: \"kubernetes.io/projected/d17e076f-a4cc-4d33-8945-4215505005b1-kube-api-access-xgn7v\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.874519 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcwvc"] Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.881785 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcwvc"] Jan 21 16:18:59 crc kubenswrapper[4890]: I0121 16:18:59.925757 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17e076f-a4cc-4d33-8945-4215505005b1" path="/var/lib/kubelet/pods/d17e076f-a4cc-4d33-8945-4215505005b1/volumes" Jan 21 16:19:11 
crc kubenswrapper[4890]: I0121 16:19:11.523752 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f8p7r"] Jan 21 16:19:11 crc kubenswrapper[4890]: E0121 16:19:11.524680 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17e076f-a4cc-4d33-8945-4215505005b1" containerName="registry-server" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.524697 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17e076f-a4cc-4d33-8945-4215505005b1" containerName="registry-server" Jan 21 16:19:11 crc kubenswrapper[4890]: E0121 16:19:11.524711 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17e076f-a4cc-4d33-8945-4215505005b1" containerName="extract-utilities" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.524720 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17e076f-a4cc-4d33-8945-4215505005b1" containerName="extract-utilities" Jan 21 16:19:11 crc kubenswrapper[4890]: E0121 16:19:11.524740 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17e076f-a4cc-4d33-8945-4215505005b1" containerName="extract-content" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.524750 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17e076f-a4cc-4d33-8945-4215505005b1" containerName="extract-content" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.524930 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17e076f-a4cc-4d33-8945-4215505005b1" containerName="registry-server" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.526488 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.536391 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8p7r"] Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.706166 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-utilities\") pod \"certified-operators-f8p7r\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.706233 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8b5\" (UniqueName: \"kubernetes.io/projected/d9de05c5-eea5-4a04-a2ea-da9ced87a071-kube-api-access-7n8b5\") pod \"certified-operators-f8p7r\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.706306 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-catalog-content\") pod \"certified-operators-f8p7r\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.808071 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-catalog-content\") pod \"certified-operators-f8p7r\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.808184 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-utilities\") pod \"certified-operators-f8p7r\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.808230 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n8b5\" (UniqueName: \"kubernetes.io/projected/d9de05c5-eea5-4a04-a2ea-da9ced87a071-kube-api-access-7n8b5\") pod \"certified-operators-f8p7r\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.809485 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-catalog-content\") pod \"certified-operators-f8p7r\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.809589 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-utilities\") pod \"certified-operators-f8p7r\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.836153 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n8b5\" (UniqueName: \"kubernetes.io/projected/d9de05c5-eea5-4a04-a2ea-da9ced87a071-kube-api-access-7n8b5\") pod \"certified-operators-f8p7r\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:11 crc kubenswrapper[4890]: I0121 16:19:11.883855 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:12 crc kubenswrapper[4890]: I0121 16:19:12.373884 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8p7r"] Jan 21 16:19:12 crc kubenswrapper[4890]: I0121 16:19:12.632309 4890 generic.go:334] "Generic (PLEG): container finished" podID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerID="845487834cbc6040f5e55baa70ff9d129b0bf77cb8447eae65a14774af5d3690" exitCode=0 Jan 21 16:19:12 crc kubenswrapper[4890]: I0121 16:19:12.632365 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8p7r" event={"ID":"d9de05c5-eea5-4a04-a2ea-da9ced87a071","Type":"ContainerDied","Data":"845487834cbc6040f5e55baa70ff9d129b0bf77cb8447eae65a14774af5d3690"} Jan 21 16:19:12 crc kubenswrapper[4890]: I0121 16:19:12.632388 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8p7r" event={"ID":"d9de05c5-eea5-4a04-a2ea-da9ced87a071","Type":"ContainerStarted","Data":"3c770940640df8e959c53c92cd1646fbceffcfb3e625d17ab5ade259ac149c25"} Jan 21 16:19:13 crc kubenswrapper[4890]: I0121 16:19:13.645010 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8p7r" event={"ID":"d9de05c5-eea5-4a04-a2ea-da9ced87a071","Type":"ContainerStarted","Data":"57588e5ee9f37cc6df73a2a0cd702744e11d966e1e664bc41166df7755cde999"} Jan 21 16:19:14 crc kubenswrapper[4890]: I0121 16:19:14.655099 4890 generic.go:334] "Generic (PLEG): container finished" podID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerID="57588e5ee9f37cc6df73a2a0cd702744e11d966e1e664bc41166df7755cde999" exitCode=0 Jan 21 16:19:14 crc kubenswrapper[4890]: I0121 16:19:14.655200 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8p7r" 
event={"ID":"d9de05c5-eea5-4a04-a2ea-da9ced87a071","Type":"ContainerDied","Data":"57588e5ee9f37cc6df73a2a0cd702744e11d966e1e664bc41166df7755cde999"} Jan 21 16:19:15 crc kubenswrapper[4890]: I0121 16:19:15.662797 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8p7r" event={"ID":"d9de05c5-eea5-4a04-a2ea-da9ced87a071","Type":"ContainerStarted","Data":"50ec729f33774011786882264bc155f11ea11c8da66f25021ad108d627a60fb6"} Jan 21 16:19:15 crc kubenswrapper[4890]: I0121 16:19:15.687080 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f8p7r" podStartSLOduration=2.2688641179999998 podStartE2EDuration="4.687045389s" podCreationTimestamp="2026-01-21 16:19:11 +0000 UTC" firstStartedPulling="2026-01-21 16:19:12.634093175 +0000 UTC m=+2834.995535594" lastFinishedPulling="2026-01-21 16:19:15.052274456 +0000 UTC m=+2837.413716865" observedRunningTime="2026-01-21 16:19:15.684570658 +0000 UTC m=+2838.046013087" watchObservedRunningTime="2026-01-21 16:19:15.687045389 +0000 UTC m=+2838.048487808" Jan 21 16:19:18 crc kubenswrapper[4890]: I0121 16:19:18.762041 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:19:18 crc kubenswrapper[4890]: I0121 16:19:18.762396 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.150792 4890 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-krt5t"] Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.152782 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.165375 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krt5t"] Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.305857 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxpqn\" (UniqueName: \"kubernetes.io/projected/38016ab2-7365-4796-9756-4487fdda7db5-kube-api-access-mxpqn\") pod \"community-operators-krt5t\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.305907 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-utilities\") pod \"community-operators-krt5t\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.305994 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-catalog-content\") pod \"community-operators-krt5t\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.407160 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-catalog-content\") pod \"community-operators-krt5t\" (UID: 
\"38016ab2-7365-4796-9756-4487fdda7db5\") " pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.407236 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxpqn\" (UniqueName: \"kubernetes.io/projected/38016ab2-7365-4796-9756-4487fdda7db5-kube-api-access-mxpqn\") pod \"community-operators-krt5t\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.407255 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-utilities\") pod \"community-operators-krt5t\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.407727 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-catalog-content\") pod \"community-operators-krt5t\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.407770 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-utilities\") pod \"community-operators-krt5t\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.428290 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxpqn\" (UniqueName: \"kubernetes.io/projected/38016ab2-7365-4796-9756-4487fdda7db5-kube-api-access-mxpqn\") pod \"community-operators-krt5t\" (UID: 
\"38016ab2-7365-4796-9756-4487fdda7db5\") " pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.474125 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:19 crc kubenswrapper[4890]: I0121 16:19:19.764332 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krt5t"] Jan 21 16:19:19 crc kubenswrapper[4890]: W0121 16:19:19.766274 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38016ab2_7365_4796_9756_4487fdda7db5.slice/crio-3607ca4662480b7e524b48eb4ea397ee7dd73d92255125b42d84138ce4ef8365 WatchSource:0}: Error finding container 3607ca4662480b7e524b48eb4ea397ee7dd73d92255125b42d84138ce4ef8365: Status 404 returned error can't find the container with id 3607ca4662480b7e524b48eb4ea397ee7dd73d92255125b42d84138ce4ef8365 Jan 21 16:19:20 crc kubenswrapper[4890]: I0121 16:19:20.706668 4890 generic.go:334] "Generic (PLEG): container finished" podID="38016ab2-7365-4796-9756-4487fdda7db5" containerID="7cf419cfda2291a0f61be57eaa058f764021cad5e81ea167bfd84cbf22ed21b1" exitCode=0 Jan 21 16:19:20 crc kubenswrapper[4890]: I0121 16:19:20.706784 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krt5t" event={"ID":"38016ab2-7365-4796-9756-4487fdda7db5","Type":"ContainerDied","Data":"7cf419cfda2291a0f61be57eaa058f764021cad5e81ea167bfd84cbf22ed21b1"} Jan 21 16:19:20 crc kubenswrapper[4890]: I0121 16:19:20.706967 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krt5t" event={"ID":"38016ab2-7365-4796-9756-4487fdda7db5","Type":"ContainerStarted","Data":"3607ca4662480b7e524b48eb4ea397ee7dd73d92255125b42d84138ce4ef8365"} Jan 21 16:19:21 crc kubenswrapper[4890]: I0121 16:19:21.714932 4890 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-krt5t" event={"ID":"38016ab2-7365-4796-9756-4487fdda7db5","Type":"ContainerStarted","Data":"31f9d7a0328d7b3f9b9da5632f0f7f6d957bfd79156e17d5f672a37b0c0e56fd"} Jan 21 16:19:21 crc kubenswrapper[4890]: I0121 16:19:21.884388 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:21 crc kubenswrapper[4890]: I0121 16:19:21.884451 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:21 crc kubenswrapper[4890]: I0121 16:19:21.923878 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:22 crc kubenswrapper[4890]: I0121 16:19:22.728516 4890 generic.go:334] "Generic (PLEG): container finished" podID="38016ab2-7365-4796-9756-4487fdda7db5" containerID="31f9d7a0328d7b3f9b9da5632f0f7f6d957bfd79156e17d5f672a37b0c0e56fd" exitCode=0 Jan 21 16:19:22 crc kubenswrapper[4890]: I0121 16:19:22.728616 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krt5t" event={"ID":"38016ab2-7365-4796-9756-4487fdda7db5","Type":"ContainerDied","Data":"31f9d7a0328d7b3f9b9da5632f0f7f6d957bfd79156e17d5f672a37b0c0e56fd"} Jan 21 16:19:22 crc kubenswrapper[4890]: I0121 16:19:22.771554 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:23 crc kubenswrapper[4890]: I0121 16:19:23.740024 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krt5t" event={"ID":"38016ab2-7365-4796-9756-4487fdda7db5","Type":"ContainerStarted","Data":"d57e1881345bacf4dc5f8693d2002fcd3c7a916db7ee5f24ed3ef0f9a362ebf8"} Jan 21 16:19:23 crc kubenswrapper[4890]: I0121 16:19:23.760268 4890 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-krt5t" podStartSLOduration=2.282362758 podStartE2EDuration="4.760244475s" podCreationTimestamp="2026-01-21 16:19:19 +0000 UTC" firstStartedPulling="2026-01-21 16:19:20.708969583 +0000 UTC m=+2843.070412002" lastFinishedPulling="2026-01-21 16:19:23.18685131 +0000 UTC m=+2845.548293719" observedRunningTime="2026-01-21 16:19:23.757276792 +0000 UTC m=+2846.118719201" watchObservedRunningTime="2026-01-21 16:19:23.760244475 +0000 UTC m=+2846.121686884" Jan 21 16:19:24 crc kubenswrapper[4890]: I0121 16:19:24.301169 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8p7r"] Jan 21 16:19:24 crc kubenswrapper[4890]: I0121 16:19:24.746249 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f8p7r" podUID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerName="registry-server" containerID="cri-o://50ec729f33774011786882264bc155f11ea11c8da66f25021ad108d627a60fb6" gracePeriod=2 Jan 21 16:19:25 crc kubenswrapper[4890]: I0121 16:19:25.756158 4890 generic.go:334] "Generic (PLEG): container finished" podID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerID="50ec729f33774011786882264bc155f11ea11c8da66f25021ad108d627a60fb6" exitCode=0 Jan 21 16:19:25 crc kubenswrapper[4890]: I0121 16:19:25.756203 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8p7r" event={"ID":"d9de05c5-eea5-4a04-a2ea-da9ced87a071","Type":"ContainerDied","Data":"50ec729f33774011786882264bc155f11ea11c8da66f25021ad108d627a60fb6"} Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.505489 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.526248 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-catalog-content\") pod \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.526430 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-utilities\") pod \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.527556 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-utilities" (OuterVolumeSpecName: "utilities") pod "d9de05c5-eea5-4a04-a2ea-da9ced87a071" (UID: "d9de05c5-eea5-4a04-a2ea-da9ced87a071"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.526469 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n8b5\" (UniqueName: \"kubernetes.io/projected/d9de05c5-eea5-4a04-a2ea-da9ced87a071-kube-api-access-7n8b5\") pod \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\" (UID: \"d9de05c5-eea5-4a04-a2ea-da9ced87a071\") " Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.527875 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.544789 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9de05c5-eea5-4a04-a2ea-da9ced87a071-kube-api-access-7n8b5" (OuterVolumeSpecName: "kube-api-access-7n8b5") pod "d9de05c5-eea5-4a04-a2ea-da9ced87a071" (UID: "d9de05c5-eea5-4a04-a2ea-da9ced87a071"). InnerVolumeSpecName "kube-api-access-7n8b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.588928 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9de05c5-eea5-4a04-a2ea-da9ced87a071" (UID: "d9de05c5-eea5-4a04-a2ea-da9ced87a071"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.628293 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9de05c5-eea5-4a04-a2ea-da9ced87a071-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.628328 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n8b5\" (UniqueName: \"kubernetes.io/projected/d9de05c5-eea5-4a04-a2ea-da9ced87a071-kube-api-access-7n8b5\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.766529 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8p7r" event={"ID":"d9de05c5-eea5-4a04-a2ea-da9ced87a071","Type":"ContainerDied","Data":"3c770940640df8e959c53c92cd1646fbceffcfb3e625d17ab5ade259ac149c25"} Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.766597 4890 scope.go:117] "RemoveContainer" containerID="50ec729f33774011786882264bc155f11ea11c8da66f25021ad108d627a60fb6" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.766611 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8p7r" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.802026 4890 scope.go:117] "RemoveContainer" containerID="57588e5ee9f37cc6df73a2a0cd702744e11d966e1e664bc41166df7755cde999" Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.806293 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8p7r"] Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.811662 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f8p7r"] Jan 21 16:19:26 crc kubenswrapper[4890]: I0121 16:19:26.817047 4890 scope.go:117] "RemoveContainer" containerID="845487834cbc6040f5e55baa70ff9d129b0bf77cb8447eae65a14774af5d3690" Jan 21 16:19:27 crc kubenswrapper[4890]: I0121 16:19:27.921678 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" path="/var/lib/kubelet/pods/d9de05c5-eea5-4a04-a2ea-da9ced87a071/volumes" Jan 21 16:19:29 crc kubenswrapper[4890]: I0121 16:19:29.475711 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:29 crc kubenswrapper[4890]: I0121 16:19:29.476079 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:29 crc kubenswrapper[4890]: I0121 16:19:29.514679 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:29 crc kubenswrapper[4890]: I0121 16:19:29.822589 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:30 crc kubenswrapper[4890]: I0121 16:19:30.502625 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krt5t"] Jan 21 16:19:31 crc 
kubenswrapper[4890]: I0121 16:19:31.796936 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-krt5t" podUID="38016ab2-7365-4796-9756-4487fdda7db5" containerName="registry-server" containerID="cri-o://d57e1881345bacf4dc5f8693d2002fcd3c7a916db7ee5f24ed3ef0f9a362ebf8" gracePeriod=2 Jan 21 16:19:33 crc kubenswrapper[4890]: I0121 16:19:33.830628 4890 generic.go:334] "Generic (PLEG): container finished" podID="38016ab2-7365-4796-9756-4487fdda7db5" containerID="d57e1881345bacf4dc5f8693d2002fcd3c7a916db7ee5f24ed3ef0f9a362ebf8" exitCode=0 Jan 21 16:19:33 crc kubenswrapper[4890]: I0121 16:19:33.830924 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krt5t" event={"ID":"38016ab2-7365-4796-9756-4487fdda7db5","Type":"ContainerDied","Data":"d57e1881345bacf4dc5f8693d2002fcd3c7a916db7ee5f24ed3ef0f9a362ebf8"} Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.096396 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.236858 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-catalog-content\") pod \"38016ab2-7365-4796-9756-4487fdda7db5\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.236952 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-utilities\") pod \"38016ab2-7365-4796-9756-4487fdda7db5\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.237090 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxpqn\" (UniqueName: \"kubernetes.io/projected/38016ab2-7365-4796-9756-4487fdda7db5-kube-api-access-mxpqn\") pod \"38016ab2-7365-4796-9756-4487fdda7db5\" (UID: \"38016ab2-7365-4796-9756-4487fdda7db5\") " Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.240309 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-utilities" (OuterVolumeSpecName: "utilities") pod "38016ab2-7365-4796-9756-4487fdda7db5" (UID: "38016ab2-7365-4796-9756-4487fdda7db5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.249697 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38016ab2-7365-4796-9756-4487fdda7db5-kube-api-access-mxpqn" (OuterVolumeSpecName: "kube-api-access-mxpqn") pod "38016ab2-7365-4796-9756-4487fdda7db5" (UID: "38016ab2-7365-4796-9756-4487fdda7db5"). InnerVolumeSpecName "kube-api-access-mxpqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.306597 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38016ab2-7365-4796-9756-4487fdda7db5" (UID: "38016ab2-7365-4796-9756-4487fdda7db5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.338995 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxpqn\" (UniqueName: \"kubernetes.io/projected/38016ab2-7365-4796-9756-4487fdda7db5-kube-api-access-mxpqn\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.339044 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.339056 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38016ab2-7365-4796-9756-4487fdda7db5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.842421 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krt5t" event={"ID":"38016ab2-7365-4796-9756-4487fdda7db5","Type":"ContainerDied","Data":"3607ca4662480b7e524b48eb4ea397ee7dd73d92255125b42d84138ce4ef8365"} Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.842480 4890 scope.go:117] "RemoveContainer" containerID="d57e1881345bacf4dc5f8693d2002fcd3c7a916db7ee5f24ed3ef0f9a362ebf8" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.842564 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-krt5t" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.860543 4890 scope.go:117] "RemoveContainer" containerID="31f9d7a0328d7b3f9b9da5632f0f7f6d957bfd79156e17d5f672a37b0c0e56fd" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.881016 4890 scope.go:117] "RemoveContainer" containerID="7cf419cfda2291a0f61be57eaa058f764021cad5e81ea167bfd84cbf22ed21b1" Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.883461 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krt5t"] Jan 21 16:19:34 crc kubenswrapper[4890]: I0121 16:19:34.889247 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-krt5t"] Jan 21 16:19:35 crc kubenswrapper[4890]: I0121 16:19:35.932895 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38016ab2-7365-4796-9756-4487fdda7db5" path="/var/lib/kubelet/pods/38016ab2-7365-4796-9756-4487fdda7db5/volumes" Jan 21 16:19:48 crc kubenswrapper[4890]: I0121 16:19:48.766157 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:19:48 crc kubenswrapper[4890]: I0121 16:19:48.767110 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:20:18 crc kubenswrapper[4890]: I0121 16:20:18.762631 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:20:18 crc kubenswrapper[4890]: I0121 16:20:18.763152 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:20:18 crc kubenswrapper[4890]: I0121 16:20:18.763192 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:20:18 crc kubenswrapper[4890]: I0121 16:20:18.763803 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6d5db392ccdd54eeb723f53cc6eb20c65891ec1b82a753f16ee04a3feb2e734"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:20:18 crc kubenswrapper[4890]: I0121 16:20:18.763847 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://b6d5db392ccdd54eeb723f53cc6eb20c65891ec1b82a753f16ee04a3feb2e734" gracePeriod=600 Jan 21 16:20:19 crc kubenswrapper[4890]: I0121 16:20:19.139053 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="b6d5db392ccdd54eeb723f53cc6eb20c65891ec1b82a753f16ee04a3feb2e734" exitCode=0 Jan 21 16:20:19 crc kubenswrapper[4890]: I0121 16:20:19.139120 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"b6d5db392ccdd54eeb723f53cc6eb20c65891ec1b82a753f16ee04a3feb2e734"} Jan 21 16:20:19 crc kubenswrapper[4890]: I0121 16:20:19.139424 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb"} Jan 21 16:20:19 crc kubenswrapper[4890]: I0121 16:20:19.139451 4890 scope.go:117] "RemoveContainer" containerID="0959f275218fe7ae6fa39827bdf8ed4a04fd3e7497cfbde7113fd5d78e116063" Jan 21 16:22:48 crc kubenswrapper[4890]: I0121 16:22:48.761845 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:22:48 crc kubenswrapper[4890]: I0121 16:22:48.762337 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:23:18 crc kubenswrapper[4890]: I0121 16:23:18.761796 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:23:18 crc kubenswrapper[4890]: I0121 16:23:18.762400 4890 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:23:48 crc kubenswrapper[4890]: I0121 16:23:48.763605 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:23:48 crc kubenswrapper[4890]: I0121 16:23:48.764215 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:23:48 crc kubenswrapper[4890]: I0121 16:23:48.764287 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:23:48 crc kubenswrapper[4890]: I0121 16:23:48.765053 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:23:48 crc kubenswrapper[4890]: I0121 16:23:48.765109 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" 
containerID="cri-o://d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" gracePeriod=600 Jan 21 16:23:48 crc kubenswrapper[4890]: E0121 16:23:48.885378 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:23:49 crc kubenswrapper[4890]: I0121 16:23:49.613294 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" exitCode=0 Jan 21 16:23:49 crc kubenswrapper[4890]: I0121 16:23:49.613369 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb"} Jan 21 16:23:49 crc kubenswrapper[4890]: I0121 16:23:49.613429 4890 scope.go:117] "RemoveContainer" containerID="b6d5db392ccdd54eeb723f53cc6eb20c65891ec1b82a753f16ee04a3feb2e734" Jan 21 16:23:49 crc kubenswrapper[4890]: I0121 16:23:49.628091 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:23:49 crc kubenswrapper[4890]: E0121 16:23:49.634225 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" 
podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:24:02 crc kubenswrapper[4890]: I0121 16:24:02.914376 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:24:02 crc kubenswrapper[4890]: E0121 16:24:02.915782 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:24:15 crc kubenswrapper[4890]: I0121 16:24:15.915243 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:24:15 crc kubenswrapper[4890]: E0121 16:24:15.916035 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:24:27 crc kubenswrapper[4890]: I0121 16:24:27.935189 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:24:27 crc kubenswrapper[4890]: E0121 16:24:27.937369 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:24:41 crc kubenswrapper[4890]: I0121 16:24:41.913878 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:24:41 crc kubenswrapper[4890]: E0121 16:24:41.914810 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:24:56 crc kubenswrapper[4890]: I0121 16:24:56.913900 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:24:56 crc kubenswrapper[4890]: E0121 16:24:56.914690 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:25:08 crc kubenswrapper[4890]: I0121 16:25:08.914592 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:25:08 crc kubenswrapper[4890]: E0121 16:25:08.915466 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:25:23 crc kubenswrapper[4890]: I0121 16:25:23.915055 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:25:23 crc kubenswrapper[4890]: E0121 16:25:23.916101 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:25:35 crc kubenswrapper[4890]: I0121 16:25:35.914695 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:25:35 crc kubenswrapper[4890]: E0121 16:25:35.915430 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:25:49 crc kubenswrapper[4890]: I0121 16:25:49.914604 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:25:49 crc kubenswrapper[4890]: E0121 16:25:49.915309 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:26:04 crc kubenswrapper[4890]: I0121 16:26:04.914300 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:26:04 crc kubenswrapper[4890]: E0121 16:26:04.915137 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:26:19 crc kubenswrapper[4890]: I0121 16:26:19.914246 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:26:19 crc kubenswrapper[4890]: E0121 16:26:19.914991 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:26:33 crc kubenswrapper[4890]: I0121 16:26:33.914182 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:26:33 crc kubenswrapper[4890]: E0121 16:26:33.915306 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:26:48 crc kubenswrapper[4890]: I0121 16:26:48.914561 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:26:48 crc kubenswrapper[4890]: E0121 16:26:48.915110 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:27:00 crc kubenswrapper[4890]: I0121 16:27:00.914788 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:27:00 crc kubenswrapper[4890]: E0121 16:27:00.917261 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:27:13 crc kubenswrapper[4890]: I0121 16:27:13.914115 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:27:13 crc kubenswrapper[4890]: E0121 16:27:13.914813 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:27:25 crc kubenswrapper[4890]: I0121 16:27:25.914222 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:27:25 crc kubenswrapper[4890]: E0121 16:27:25.914968 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:27:39 crc kubenswrapper[4890]: I0121 16:27:39.914450 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:27:39 crc kubenswrapper[4890]: E0121 16:27:39.915364 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:27:53 crc kubenswrapper[4890]: I0121 16:27:53.914042 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:27:53 crc kubenswrapper[4890]: E0121 16:27:53.914788 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:28:08 crc kubenswrapper[4890]: I0121 16:28:08.913969 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:28:08 crc kubenswrapper[4890]: E0121 16:28:08.914821 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:28:19 crc kubenswrapper[4890]: I0121 16:28:19.914075 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:28:19 crc kubenswrapper[4890]: E0121 16:28:19.914795 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:28:30 crc kubenswrapper[4890]: I0121 16:28:30.914431 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:28:30 crc kubenswrapper[4890]: E0121 16:28:30.915164 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:28:43 crc kubenswrapper[4890]: I0121 16:28:43.914950 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:28:43 crc kubenswrapper[4890]: E0121 16:28:43.916099 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:28:54 crc kubenswrapper[4890]: I0121 16:28:54.914263 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:28:56 crc kubenswrapper[4890]: I0121 16:28:56.122948 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"02fbcb98dedd37c7be52f8f98f7e0f54d9e4fa175b8070234096b1ad7c7d0957"} Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.859413 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-txclz"] Jan 21 16:29:21 crc kubenswrapper[4890]: E0121 16:29:21.860193 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerName="registry-server" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.860209 4890 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerName="registry-server" Jan 21 16:29:21 crc kubenswrapper[4890]: E0121 16:29:21.860234 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerName="extract-content" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.860240 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerName="extract-content" Jan 21 16:29:21 crc kubenswrapper[4890]: E0121 16:29:21.860250 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerName="extract-utilities" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.860256 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerName="extract-utilities" Jan 21 16:29:21 crc kubenswrapper[4890]: E0121 16:29:21.860265 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38016ab2-7365-4796-9756-4487fdda7db5" containerName="extract-content" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.860272 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="38016ab2-7365-4796-9756-4487fdda7db5" containerName="extract-content" Jan 21 16:29:21 crc kubenswrapper[4890]: E0121 16:29:21.860281 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38016ab2-7365-4796-9756-4487fdda7db5" containerName="registry-server" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.860286 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="38016ab2-7365-4796-9756-4487fdda7db5" containerName="registry-server" Jan 21 16:29:21 crc kubenswrapper[4890]: E0121 16:29:21.860298 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38016ab2-7365-4796-9756-4487fdda7db5" containerName="extract-utilities" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.860304 4890 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="38016ab2-7365-4796-9756-4487fdda7db5" containerName="extract-utilities" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.860443 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9de05c5-eea5-4a04-a2ea-da9ced87a071" containerName="registry-server" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.860452 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="38016ab2-7365-4796-9756-4487fdda7db5" containerName="registry-server" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.861346 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.877934 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txclz"] Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.926309 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-catalog-content\") pod \"certified-operators-txclz\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.926598 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-utilities\") pod \"certified-operators-txclz\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:21 crc kubenswrapper[4890]: I0121 16:29:21.926685 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzp7l\" (UniqueName: \"kubernetes.io/projected/f7c405bb-d1b6-4ff4-b577-3435087dca40-kube-api-access-xzp7l\") pod 
\"certified-operators-txclz\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:22 crc kubenswrapper[4890]: I0121 16:29:22.027802 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzp7l\" (UniqueName: \"kubernetes.io/projected/f7c405bb-d1b6-4ff4-b577-3435087dca40-kube-api-access-xzp7l\") pod \"certified-operators-txclz\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:22 crc kubenswrapper[4890]: I0121 16:29:22.027882 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-catalog-content\") pod \"certified-operators-txclz\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:22 crc kubenswrapper[4890]: I0121 16:29:22.027940 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-utilities\") pod \"certified-operators-txclz\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:22 crc kubenswrapper[4890]: I0121 16:29:22.028474 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-catalog-content\") pod \"certified-operators-txclz\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:22 crc kubenswrapper[4890]: I0121 16:29:22.028540 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-utilities\") pod \"certified-operators-txclz\" (UID: 
\"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:22 crc kubenswrapper[4890]: I0121 16:29:22.046707 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzp7l\" (UniqueName: \"kubernetes.io/projected/f7c405bb-d1b6-4ff4-b577-3435087dca40-kube-api-access-xzp7l\") pod \"certified-operators-txclz\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:22 crc kubenswrapper[4890]: I0121 16:29:22.197161 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:22 crc kubenswrapper[4890]: I0121 16:29:22.449409 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txclz"] Jan 21 16:29:23 crc kubenswrapper[4890]: I0121 16:29:23.284713 4890 generic.go:334] "Generic (PLEG): container finished" podID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerID="01300b94ef66b8ea24258f276a922e4a51ef7e5bdd650de0c5804d4ee49ec086" exitCode=0 Jan 21 16:29:23 crc kubenswrapper[4890]: I0121 16:29:23.284807 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txclz" event={"ID":"f7c405bb-d1b6-4ff4-b577-3435087dca40","Type":"ContainerDied","Data":"01300b94ef66b8ea24258f276a922e4a51ef7e5bdd650de0c5804d4ee49ec086"} Jan 21 16:29:23 crc kubenswrapper[4890]: I0121 16:29:23.285023 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txclz" event={"ID":"f7c405bb-d1b6-4ff4-b577-3435087dca40","Type":"ContainerStarted","Data":"a539a723c8d586571689a4f92c37f6cd8f37e5da5265d101b83af1aba0eb66ae"} Jan 21 16:29:23 crc kubenswrapper[4890]: I0121 16:29:23.286476 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:29:24 crc kubenswrapper[4890]: I0121 16:29:24.294562 4890 
generic.go:334] "Generic (PLEG): container finished" podID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerID="a9f45094c2888e271515c6059088d0bd74879add4c281c53cd56fa264bffdb2c" exitCode=0 Jan 21 16:29:24 crc kubenswrapper[4890]: I0121 16:29:24.294655 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txclz" event={"ID":"f7c405bb-d1b6-4ff4-b577-3435087dca40","Type":"ContainerDied","Data":"a9f45094c2888e271515c6059088d0bd74879add4c281c53cd56fa264bffdb2c"} Jan 21 16:29:25 crc kubenswrapper[4890]: I0121 16:29:25.302952 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txclz" event={"ID":"f7c405bb-d1b6-4ff4-b577-3435087dca40","Type":"ContainerStarted","Data":"1cefea2656e01c11d999188f0c839ab2b8807717b59df3b59387e2b1e0e7f4bc"} Jan 21 16:29:25 crc kubenswrapper[4890]: I0121 16:29:25.328178 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-txclz" podStartSLOduration=2.569740974 podStartE2EDuration="4.328157707s" podCreationTimestamp="2026-01-21 16:29:21 +0000 UTC" firstStartedPulling="2026-01-21 16:29:23.286216408 +0000 UTC m=+3445.647658817" lastFinishedPulling="2026-01-21 16:29:25.044633141 +0000 UTC m=+3447.406075550" observedRunningTime="2026-01-21 16:29:25.316932508 +0000 UTC m=+3447.678374917" watchObservedRunningTime="2026-01-21 16:29:25.328157707 +0000 UTC m=+3447.689600116" Jan 21 16:29:32 crc kubenswrapper[4890]: I0121 16:29:32.198052 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:32 crc kubenswrapper[4890]: I0121 16:29:32.198594 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:32 crc kubenswrapper[4890]: I0121 16:29:32.237222 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:32 crc kubenswrapper[4890]: I0121 16:29:32.403605 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:32 crc kubenswrapper[4890]: I0121 16:29:32.468738 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txclz"] Jan 21 16:29:34 crc kubenswrapper[4890]: I0121 16:29:34.368731 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-txclz" podUID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerName="registry-server" containerID="cri-o://1cefea2656e01c11d999188f0c839ab2b8807717b59df3b59387e2b1e0e7f4bc" gracePeriod=2 Jan 21 16:29:35 crc kubenswrapper[4890]: I0121 16:29:35.379785 4890 generic.go:334] "Generic (PLEG): container finished" podID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerID="1cefea2656e01c11d999188f0c839ab2b8807717b59df3b59387e2b1e0e7f4bc" exitCode=0 Jan 21 16:29:35 crc kubenswrapper[4890]: I0121 16:29:35.380127 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txclz" event={"ID":"f7c405bb-d1b6-4ff4-b577-3435087dca40","Type":"ContainerDied","Data":"1cefea2656e01c11d999188f0c839ab2b8807717b59df3b59387e2b1e0e7f4bc"} Jan 21 16:29:35 crc kubenswrapper[4890]: I0121 16:29:35.908827 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.028579 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzp7l\" (UniqueName: \"kubernetes.io/projected/f7c405bb-d1b6-4ff4-b577-3435087dca40-kube-api-access-xzp7l\") pod \"f7c405bb-d1b6-4ff4-b577-3435087dca40\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.028776 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-catalog-content\") pod \"f7c405bb-d1b6-4ff4-b577-3435087dca40\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.028819 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-utilities\") pod \"f7c405bb-d1b6-4ff4-b577-3435087dca40\" (UID: \"f7c405bb-d1b6-4ff4-b577-3435087dca40\") " Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.029865 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-utilities" (OuterVolumeSpecName: "utilities") pod "f7c405bb-d1b6-4ff4-b577-3435087dca40" (UID: "f7c405bb-d1b6-4ff4-b577-3435087dca40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.034288 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c405bb-d1b6-4ff4-b577-3435087dca40-kube-api-access-xzp7l" (OuterVolumeSpecName: "kube-api-access-xzp7l") pod "f7c405bb-d1b6-4ff4-b577-3435087dca40" (UID: "f7c405bb-d1b6-4ff4-b577-3435087dca40"). InnerVolumeSpecName "kube-api-access-xzp7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.072696 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7c405bb-d1b6-4ff4-b577-3435087dca40" (UID: "f7c405bb-d1b6-4ff4-b577-3435087dca40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.130920 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.130996 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c405bb-d1b6-4ff4-b577-3435087dca40-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.131009 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzp7l\" (UniqueName: \"kubernetes.io/projected/f7c405bb-d1b6-4ff4-b577-3435087dca40-kube-api-access-xzp7l\") on node \"crc\" DevicePath \"\"" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.387698 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txclz" event={"ID":"f7c405bb-d1b6-4ff4-b577-3435087dca40","Type":"ContainerDied","Data":"a539a723c8d586571689a4f92c37f6cd8f37e5da5265d101b83af1aba0eb66ae"} Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.387770 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txclz" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.388051 4890 scope.go:117] "RemoveContainer" containerID="1cefea2656e01c11d999188f0c839ab2b8807717b59df3b59387e2b1e0e7f4bc" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.408647 4890 scope.go:117] "RemoveContainer" containerID="a9f45094c2888e271515c6059088d0bd74879add4c281c53cd56fa264bffdb2c" Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.418515 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txclz"] Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.424652 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-txclz"] Jan 21 16:29:36 crc kubenswrapper[4890]: I0121 16:29:36.445719 4890 scope.go:117] "RemoveContainer" containerID="01300b94ef66b8ea24258f276a922e4a51ef7e5bdd650de0c5804d4ee49ec086" Jan 21 16:29:37 crc kubenswrapper[4890]: I0121 16:29:37.922389 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c405bb-d1b6-4ff4-b577-3435087dca40" path="/var/lib/kubelet/pods/f7c405bb-d1b6-4ff4-b577-3435087dca40/volumes" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.149255 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666"] Jan 21 16:30:00 crc kubenswrapper[4890]: E0121 16:30:00.150713 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerName="extract-utilities" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.150736 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerName="extract-utilities" Jan 21 16:30:00 crc kubenswrapper[4890]: E0121 16:30:00.150776 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c405bb-d1b6-4ff4-b577-3435087dca40" 
containerName="extract-content" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.150785 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerName="extract-content" Jan 21 16:30:00 crc kubenswrapper[4890]: E0121 16:30:00.150802 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerName="registry-server" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.150812 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerName="registry-server" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.151016 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c405bb-d1b6-4ff4-b577-3435087dca40" containerName="registry-server" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.151884 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.163305 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.163469 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.164409 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3206ab55-f23f-439b-af10-6fbadd2548f5-config-volume\") pod \"collect-profiles-29483550-w6666\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.164487 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3206ab55-f23f-439b-af10-6fbadd2548f5-secret-volume\") pod \"collect-profiles-29483550-w6666\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.164543 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gkff\" (UniqueName: \"kubernetes.io/projected/3206ab55-f23f-439b-af10-6fbadd2548f5-kube-api-access-4gkff\") pod \"collect-profiles-29483550-w6666\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.164538 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666"] Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.264970 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3206ab55-f23f-439b-af10-6fbadd2548f5-config-volume\") pod \"collect-profiles-29483550-w6666\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.265013 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3206ab55-f23f-439b-af10-6fbadd2548f5-secret-volume\") pod \"collect-profiles-29483550-w6666\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.265061 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gkff\" 
(UniqueName: \"kubernetes.io/projected/3206ab55-f23f-439b-af10-6fbadd2548f5-kube-api-access-4gkff\") pod \"collect-profiles-29483550-w6666\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.266223 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3206ab55-f23f-439b-af10-6fbadd2548f5-config-volume\") pod \"collect-profiles-29483550-w6666\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.270704 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3206ab55-f23f-439b-af10-6fbadd2548f5-secret-volume\") pod \"collect-profiles-29483550-w6666\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.288124 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gkff\" (UniqueName: \"kubernetes.io/projected/3206ab55-f23f-439b-af10-6fbadd2548f5-kube-api-access-4gkff\") pod \"collect-profiles-29483550-w6666\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.481812 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:00 crc kubenswrapper[4890]: I0121 16:30:00.915369 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666"] Jan 21 16:30:01 crc kubenswrapper[4890]: I0121 16:30:01.593854 4890 generic.go:334] "Generic (PLEG): container finished" podID="3206ab55-f23f-439b-af10-6fbadd2548f5" containerID="1d1d79f2eb8203ed88d882d14aa14010923b9782c447c1b680f9016930ca7cd4" exitCode=0 Jan 21 16:30:01 crc kubenswrapper[4890]: I0121 16:30:01.593960 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" event={"ID":"3206ab55-f23f-439b-af10-6fbadd2548f5","Type":"ContainerDied","Data":"1d1d79f2eb8203ed88d882d14aa14010923b9782c447c1b680f9016930ca7cd4"} Jan 21 16:30:01 crc kubenswrapper[4890]: I0121 16:30:01.594156 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" event={"ID":"3206ab55-f23f-439b-af10-6fbadd2548f5","Type":"ContainerStarted","Data":"b34752111239f63cbc62e6f2b1519ec0fa1d12d7d7012ca58c86898ac504f967"} Jan 21 16:30:02 crc kubenswrapper[4890]: I0121 16:30:02.875298 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:02 crc kubenswrapper[4890]: I0121 16:30:02.904714 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3206ab55-f23f-439b-af10-6fbadd2548f5-secret-volume\") pod \"3206ab55-f23f-439b-af10-6fbadd2548f5\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " Jan 21 16:30:02 crc kubenswrapper[4890]: I0121 16:30:02.904772 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gkff\" (UniqueName: \"kubernetes.io/projected/3206ab55-f23f-439b-af10-6fbadd2548f5-kube-api-access-4gkff\") pod \"3206ab55-f23f-439b-af10-6fbadd2548f5\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " Jan 21 16:30:02 crc kubenswrapper[4890]: I0121 16:30:02.904811 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3206ab55-f23f-439b-af10-6fbadd2548f5-config-volume\") pod \"3206ab55-f23f-439b-af10-6fbadd2548f5\" (UID: \"3206ab55-f23f-439b-af10-6fbadd2548f5\") " Jan 21 16:30:02 crc kubenswrapper[4890]: I0121 16:30:02.905570 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3206ab55-f23f-439b-af10-6fbadd2548f5-config-volume" (OuterVolumeSpecName: "config-volume") pod "3206ab55-f23f-439b-af10-6fbadd2548f5" (UID: "3206ab55-f23f-439b-af10-6fbadd2548f5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:30:02 crc kubenswrapper[4890]: I0121 16:30:02.912756 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3206ab55-f23f-439b-af10-6fbadd2548f5-kube-api-access-4gkff" (OuterVolumeSpecName: "kube-api-access-4gkff") pod "3206ab55-f23f-439b-af10-6fbadd2548f5" (UID: "3206ab55-f23f-439b-af10-6fbadd2548f5"). 
InnerVolumeSpecName "kube-api-access-4gkff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:30:02 crc kubenswrapper[4890]: I0121 16:30:02.913524 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3206ab55-f23f-439b-af10-6fbadd2548f5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3206ab55-f23f-439b-af10-6fbadd2548f5" (UID: "3206ab55-f23f-439b-af10-6fbadd2548f5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:30:03 crc kubenswrapper[4890]: I0121 16:30:03.006110 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gkff\" (UniqueName: \"kubernetes.io/projected/3206ab55-f23f-439b-af10-6fbadd2548f5-kube-api-access-4gkff\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:03 crc kubenswrapper[4890]: I0121 16:30:03.006154 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3206ab55-f23f-439b-af10-6fbadd2548f5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:03 crc kubenswrapper[4890]: I0121 16:30:03.006167 4890 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3206ab55-f23f-439b-af10-6fbadd2548f5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:03 crc kubenswrapper[4890]: I0121 16:30:03.610340 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" event={"ID":"3206ab55-f23f-439b-af10-6fbadd2548f5","Type":"ContainerDied","Data":"b34752111239f63cbc62e6f2b1519ec0fa1d12d7d7012ca58c86898ac504f967"} Jan 21 16:30:03 crc kubenswrapper[4890]: I0121 16:30:03.610822 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b34752111239f63cbc62e6f2b1519ec0fa1d12d7d7012ca58c86898ac504f967" Jan 21 16:30:03 crc kubenswrapper[4890]: I0121 16:30:03.610406 4890 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666" Jan 21 16:30:03 crc kubenswrapper[4890]: I0121 16:30:03.937480 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd"] Jan 21 16:30:03 crc kubenswrapper[4890]: I0121 16:30:03.942247 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-wfgrd"] Jan 21 16:30:05 crc kubenswrapper[4890]: I0121 16:30:05.922496 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fa99d1-fd28-4795-ae17-06728e1cf697" path="/var/lib/kubelet/pods/25fa99d1-fd28-4795-ae17-06728e1cf697/volumes" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.498306 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nt4kz"] Jan 21 16:30:11 crc kubenswrapper[4890]: E0121 16:30:11.501927 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3206ab55-f23f-439b-af10-6fbadd2548f5" containerName="collect-profiles" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.501951 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="3206ab55-f23f-439b-af10-6fbadd2548f5" containerName="collect-profiles" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.502118 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="3206ab55-f23f-439b-af10-6fbadd2548f5" containerName="collect-profiles" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.503256 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.511716 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt4kz"] Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.622626 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-catalog-content\") pod \"redhat-marketplace-nt4kz\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.622691 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-utilities\") pod \"redhat-marketplace-nt4kz\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.622728 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n695\" (UniqueName: \"kubernetes.io/projected/da358dc9-c0f4-44d5-88d2-d63df22d73ac-kube-api-access-6n695\") pod \"redhat-marketplace-nt4kz\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.723942 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-catalog-content\") pod \"redhat-marketplace-nt4kz\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.724002 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-utilities\") pod \"redhat-marketplace-nt4kz\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.724034 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n695\" (UniqueName: \"kubernetes.io/projected/da358dc9-c0f4-44d5-88d2-d63df22d73ac-kube-api-access-6n695\") pod \"redhat-marketplace-nt4kz\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.724542 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-catalog-content\") pod \"redhat-marketplace-nt4kz\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.724597 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-utilities\") pod \"redhat-marketplace-nt4kz\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.747911 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n695\" (UniqueName: \"kubernetes.io/projected/da358dc9-c0f4-44d5-88d2-d63df22d73ac-kube-api-access-6n695\") pod \"redhat-marketplace-nt4kz\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:11 crc kubenswrapper[4890]: I0121 16:30:11.819943 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:12 crc kubenswrapper[4890]: I0121 16:30:12.248998 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt4kz"] Jan 21 16:30:12 crc kubenswrapper[4890]: I0121 16:30:12.673046 4890 generic.go:334] "Generic (PLEG): container finished" podID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerID="027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3" exitCode=0 Jan 21 16:30:12 crc kubenswrapper[4890]: I0121 16:30:12.673096 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt4kz" event={"ID":"da358dc9-c0f4-44d5-88d2-d63df22d73ac","Type":"ContainerDied","Data":"027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3"} Jan 21 16:30:12 crc kubenswrapper[4890]: I0121 16:30:12.673125 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt4kz" event={"ID":"da358dc9-c0f4-44d5-88d2-d63df22d73ac","Type":"ContainerStarted","Data":"13699672c6719177ce290bccd8164da7aa7d3570f5670e2f2704cdd803eed9c6"} Jan 21 16:30:14 crc kubenswrapper[4890]: I0121 16:30:14.690342 4890 generic.go:334] "Generic (PLEG): container finished" podID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerID="87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460" exitCode=0 Jan 21 16:30:14 crc kubenswrapper[4890]: I0121 16:30:14.690432 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt4kz" event={"ID":"da358dc9-c0f4-44d5-88d2-d63df22d73ac","Type":"ContainerDied","Data":"87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460"} Jan 21 16:30:16 crc kubenswrapper[4890]: I0121 16:30:16.707958 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt4kz" 
event={"ID":"da358dc9-c0f4-44d5-88d2-d63df22d73ac","Type":"ContainerStarted","Data":"9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed"} Jan 21 16:30:16 crc kubenswrapper[4890]: I0121 16:30:16.731790 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nt4kz" podStartSLOduration=2.432250451 podStartE2EDuration="5.731764838s" podCreationTimestamp="2026-01-21 16:30:11 +0000 UTC" firstStartedPulling="2026-01-21 16:30:12.676822811 +0000 UTC m=+3495.038265220" lastFinishedPulling="2026-01-21 16:30:15.976337198 +0000 UTC m=+3498.337779607" observedRunningTime="2026-01-21 16:30:16.72501594 +0000 UTC m=+3499.086458349" watchObservedRunningTime="2026-01-21 16:30:16.731764838 +0000 UTC m=+3499.093207267" Jan 21 16:30:17 crc kubenswrapper[4890]: I0121 16:30:17.856611 4890 scope.go:117] "RemoveContainer" containerID="80253fc5641dbee354b7744e857ad42a0dc7b0bb8eec90ec632c1ec3a250170b" Jan 21 16:30:21 crc kubenswrapper[4890]: I0121 16:30:21.820717 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:21 crc kubenswrapper[4890]: I0121 16:30:21.821540 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:21 crc kubenswrapper[4890]: I0121 16:30:21.877835 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.118860 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wcp9"] Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.120335 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.129736 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wcp9"] Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.283808 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-catalog-content\") pod \"community-operators-7wcp9\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.283854 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk77x\" (UniqueName: \"kubernetes.io/projected/09bfb294-704b-42ff-bcbf-fafd02787b50-kube-api-access-bk77x\") pod \"community-operators-7wcp9\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.283888 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-utilities\") pod \"community-operators-7wcp9\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.385762 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-catalog-content\") pod \"community-operators-7wcp9\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.386112 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bk77x\" (UniqueName: \"kubernetes.io/projected/09bfb294-704b-42ff-bcbf-fafd02787b50-kube-api-access-bk77x\") pod \"community-operators-7wcp9\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.386148 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-utilities\") pod \"community-operators-7wcp9\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.386302 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-catalog-content\") pod \"community-operators-7wcp9\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.386597 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-utilities\") pod \"community-operators-7wcp9\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.406853 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk77x\" (UniqueName: \"kubernetes.io/projected/09bfb294-704b-42ff-bcbf-fafd02787b50-kube-api-access-bk77x\") pod \"community-operators-7wcp9\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.437854 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.793931 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:22 crc kubenswrapper[4890]: I0121 16:30:22.921300 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wcp9"] Jan 21 16:30:22 crc kubenswrapper[4890]: W0121 16:30:22.932692 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09bfb294_704b_42ff_bcbf_fafd02787b50.slice/crio-0b792295ab83856c19ea2b540fa4f95d4186122a1087b24702ccac9ecceb5ece WatchSource:0}: Error finding container 0b792295ab83856c19ea2b540fa4f95d4186122a1087b24702ccac9ecceb5ece: Status 404 returned error can't find the container with id 0b792295ab83856c19ea2b540fa4f95d4186122a1087b24702ccac9ecceb5ece Jan 21 16:30:23 crc kubenswrapper[4890]: I0121 16:30:23.758222 4890 generic.go:334] "Generic (PLEG): container finished" podID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerID="0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092" exitCode=0 Jan 21 16:30:23 crc kubenswrapper[4890]: I0121 16:30:23.758319 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcp9" event={"ID":"09bfb294-704b-42ff-bcbf-fafd02787b50","Type":"ContainerDied","Data":"0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092"} Jan 21 16:30:23 crc kubenswrapper[4890]: I0121 16:30:23.758615 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcp9" event={"ID":"09bfb294-704b-42ff-bcbf-fafd02787b50","Type":"ContainerStarted","Data":"0b792295ab83856c19ea2b540fa4f95d4186122a1087b24702ccac9ecceb5ece"} Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.108644 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nt4kz"] Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.109124 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nt4kz" podUID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerName="registry-server" containerID="cri-o://9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed" gracePeriod=2 Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.610294 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.749806 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-utilities\") pod \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.749897 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-catalog-content\") pod \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.749974 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n695\" (UniqueName: \"kubernetes.io/projected/da358dc9-c0f4-44d5-88d2-d63df22d73ac-kube-api-access-6n695\") pod \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\" (UID: \"da358dc9-c0f4-44d5-88d2-d63df22d73ac\") " Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.750915 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-utilities" (OuterVolumeSpecName: "utilities") pod "da358dc9-c0f4-44d5-88d2-d63df22d73ac" (UID: 
"da358dc9-c0f4-44d5-88d2-d63df22d73ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.757119 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da358dc9-c0f4-44d5-88d2-d63df22d73ac-kube-api-access-6n695" (OuterVolumeSpecName: "kube-api-access-6n695") pod "da358dc9-c0f4-44d5-88d2-d63df22d73ac" (UID: "da358dc9-c0f4-44d5-88d2-d63df22d73ac"). InnerVolumeSpecName "kube-api-access-6n695". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.775555 4890 generic.go:334] "Generic (PLEG): container finished" podID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerID="e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7" exitCode=0 Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.775637 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcp9" event={"ID":"09bfb294-704b-42ff-bcbf-fafd02787b50","Type":"ContainerDied","Data":"e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7"} Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.778368 4890 generic.go:334] "Generic (PLEG): container finished" podID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerID="9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed" exitCode=0 Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.778404 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt4kz" event={"ID":"da358dc9-c0f4-44d5-88d2-d63df22d73ac","Type":"ContainerDied","Data":"9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed"} Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.778428 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt4kz" 
event={"ID":"da358dc9-c0f4-44d5-88d2-d63df22d73ac","Type":"ContainerDied","Data":"13699672c6719177ce290bccd8164da7aa7d3570f5670e2f2704cdd803eed9c6"} Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.778446 4890 scope.go:117] "RemoveContainer" containerID="9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.778541 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt4kz" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.785006 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da358dc9-c0f4-44d5-88d2-d63df22d73ac" (UID: "da358dc9-c0f4-44d5-88d2-d63df22d73ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.802448 4890 scope.go:117] "RemoveContainer" containerID="87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.823648 4890 scope.go:117] "RemoveContainer" containerID="027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.839040 4890 scope.go:117] "RemoveContainer" containerID="9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed" Jan 21 16:30:25 crc kubenswrapper[4890]: E0121 16:30:25.839720 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed\": container with ID starting with 9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed not found: ID does not exist" containerID="9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed" Jan 21 16:30:25 crc 
kubenswrapper[4890]: I0121 16:30:25.839778 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed"} err="failed to get container status \"9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed\": rpc error: code = NotFound desc = could not find container \"9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed\": container with ID starting with 9ef89f905380748aa06e0adec907b42c4b1a10e42a323bf0c3e68ed00317e5ed not found: ID does not exist" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.839811 4890 scope.go:117] "RemoveContainer" containerID="87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460" Jan 21 16:30:25 crc kubenswrapper[4890]: E0121 16:30:25.840249 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460\": container with ID starting with 87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460 not found: ID does not exist" containerID="87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.840292 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460"} err="failed to get container status \"87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460\": rpc error: code = NotFound desc = could not find container \"87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460\": container with ID starting with 87b9be66a3fd16b563106c32efeb4ed1cb4eef121f0c167c98db49c83d79e460 not found: ID does not exist" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.840315 4890 scope.go:117] "RemoveContainer" containerID="027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3" Jan 21 
16:30:25 crc kubenswrapper[4890]: E0121 16:30:25.840635 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3\": container with ID starting with 027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3 not found: ID does not exist" containerID="027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.840657 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3"} err="failed to get container status \"027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3\": rpc error: code = NotFound desc = could not find container \"027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3\": container with ID starting with 027699a328413cad68b66ac5a50444cb16decabcde755e2e66b6aad385f94dd3 not found: ID does not exist" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.851710 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.851743 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da358dc9-c0f4-44d5-88d2-d63df22d73ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:25 crc kubenswrapper[4890]: I0121 16:30:25.851756 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n695\" (UniqueName: \"kubernetes.io/projected/da358dc9-c0f4-44d5-88d2-d63df22d73ac-kube-api-access-6n695\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:26 crc kubenswrapper[4890]: I0121 16:30:26.110304 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nt4kz"] Jan 21 16:30:26 crc kubenswrapper[4890]: I0121 16:30:26.116767 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt4kz"] Jan 21 16:30:26 crc kubenswrapper[4890]: I0121 16:30:26.786743 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcp9" event={"ID":"09bfb294-704b-42ff-bcbf-fafd02787b50","Type":"ContainerStarted","Data":"2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95"} Jan 21 16:30:26 crc kubenswrapper[4890]: I0121 16:30:26.802707 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wcp9" podStartSLOduration=2.368185128 podStartE2EDuration="4.802687508s" podCreationTimestamp="2026-01-21 16:30:22 +0000 UTC" firstStartedPulling="2026-01-21 16:30:23.759601603 +0000 UTC m=+3506.121044012" lastFinishedPulling="2026-01-21 16:30:26.194103983 +0000 UTC m=+3508.555546392" observedRunningTime="2026-01-21 16:30:26.800951905 +0000 UTC m=+3509.162394324" watchObservedRunningTime="2026-01-21 16:30:26.802687508 +0000 UTC m=+3509.164129917" Jan 21 16:30:27 crc kubenswrapper[4890]: I0121 16:30:27.921631 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" path="/var/lib/kubelet/pods/da358dc9-c0f4-44d5-88d2-d63df22d73ac/volumes" Jan 21 16:30:32 crc kubenswrapper[4890]: I0121 16:30:32.438958 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:32 crc kubenswrapper[4890]: I0121 16:30:32.439320 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:32 crc kubenswrapper[4890]: I0121 16:30:32.516265 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:32 crc kubenswrapper[4890]: I0121 16:30:32.886098 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:32 crc kubenswrapper[4890]: I0121 16:30:32.937588 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wcp9"] Jan 21 16:30:34 crc kubenswrapper[4890]: I0121 16:30:34.843855 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wcp9" podUID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerName="registry-server" containerID="cri-o://2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95" gracePeriod=2 Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.197320 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.379653 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-utilities\") pod \"09bfb294-704b-42ff-bcbf-fafd02787b50\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.379735 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk77x\" (UniqueName: \"kubernetes.io/projected/09bfb294-704b-42ff-bcbf-fafd02787b50-kube-api-access-bk77x\") pod \"09bfb294-704b-42ff-bcbf-fafd02787b50\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.379769 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-catalog-content\") pod 
\"09bfb294-704b-42ff-bcbf-fafd02787b50\" (UID: \"09bfb294-704b-42ff-bcbf-fafd02787b50\") " Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.380578 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-utilities" (OuterVolumeSpecName: "utilities") pod "09bfb294-704b-42ff-bcbf-fafd02787b50" (UID: "09bfb294-704b-42ff-bcbf-fafd02787b50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.385726 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09bfb294-704b-42ff-bcbf-fafd02787b50-kube-api-access-bk77x" (OuterVolumeSpecName: "kube-api-access-bk77x") pod "09bfb294-704b-42ff-bcbf-fafd02787b50" (UID: "09bfb294-704b-42ff-bcbf-fafd02787b50"). InnerVolumeSpecName "kube-api-access-bk77x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.444045 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09bfb294-704b-42ff-bcbf-fafd02787b50" (UID: "09bfb294-704b-42ff-bcbf-fafd02787b50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.481717 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.481758 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk77x\" (UniqueName: \"kubernetes.io/projected/09bfb294-704b-42ff-bcbf-fafd02787b50-kube-api-access-bk77x\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.481771 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09bfb294-704b-42ff-bcbf-fafd02787b50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.853181 4890 generic.go:334] "Generic (PLEG): container finished" podID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerID="2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95" exitCode=0 Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.853227 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcp9" event={"ID":"09bfb294-704b-42ff-bcbf-fafd02787b50","Type":"ContainerDied","Data":"2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95"} Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.853470 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcp9" event={"ID":"09bfb294-704b-42ff-bcbf-fafd02787b50","Type":"ContainerDied","Data":"0b792295ab83856c19ea2b540fa4f95d4186122a1087b24702ccac9ecceb5ece"} Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.853494 4890 scope.go:117] "RemoveContainer" containerID="2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 
16:30:35.853268 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wcp9" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.873854 4890 scope.go:117] "RemoveContainer" containerID="e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.887675 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wcp9"] Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.894222 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wcp9"] Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.905642 4890 scope.go:117] "RemoveContainer" containerID="0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.924879 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09bfb294-704b-42ff-bcbf-fafd02787b50" path="/var/lib/kubelet/pods/09bfb294-704b-42ff-bcbf-fafd02787b50/volumes" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.929537 4890 scope.go:117] "RemoveContainer" containerID="2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95" Jan 21 16:30:35 crc kubenswrapper[4890]: E0121 16:30:35.929871 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95\": container with ID starting with 2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95 not found: ID does not exist" containerID="2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.929902 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95"} err="failed to get 
container status \"2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95\": rpc error: code = NotFound desc = could not find container \"2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95\": container with ID starting with 2827a0ac7725cc0a8edcab8c282402d14e8ea102df49f39a16487f43dd623d95 not found: ID does not exist" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.929924 4890 scope.go:117] "RemoveContainer" containerID="e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7" Jan 21 16:30:35 crc kubenswrapper[4890]: E0121 16:30:35.930196 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7\": container with ID starting with e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7 not found: ID does not exist" containerID="e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.930219 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7"} err="failed to get container status \"e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7\": rpc error: code = NotFound desc = could not find container \"e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7\": container with ID starting with e5f93c3d0a0da27fe543846df4eefa07ff1f9a2a08b3073cb8e40a2959f544a7 not found: ID does not exist" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.930236 4890 scope.go:117] "RemoveContainer" containerID="0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092" Jan 21 16:30:35 crc kubenswrapper[4890]: E0121 16:30:35.930842 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092\": container with ID starting with 0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092 not found: ID does not exist" containerID="0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092" Jan 21 16:30:35 crc kubenswrapper[4890]: I0121 16:30:35.930890 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092"} err="failed to get container status \"0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092\": rpc error: code = NotFound desc = could not find container \"0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092\": container with ID starting with 0413b50c97d0771c088ed3965d7be7409a4fb1d5b5ac5270310a7a3b306fe092 not found: ID does not exist" Jan 21 16:31:18 crc kubenswrapper[4890]: I0121 16:31:18.762981 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:31:18 crc kubenswrapper[4890]: I0121 16:31:18.764054 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.099455 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-46dxk"] Jan 21 16:31:37 crc kubenswrapper[4890]: E0121 16:31:37.100828 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerName="extract-content" Jan 21 
16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.100846 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerName="extract-content" Jan 21 16:31:37 crc kubenswrapper[4890]: E0121 16:31:37.100869 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerName="registry-server" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.100877 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerName="registry-server" Jan 21 16:31:37 crc kubenswrapper[4890]: E0121 16:31:37.100888 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerName="extract-content" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.100896 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerName="extract-content" Jan 21 16:31:37 crc kubenswrapper[4890]: E0121 16:31:37.100915 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerName="extract-utilities" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.100922 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerName="extract-utilities" Jan 21 16:31:37 crc kubenswrapper[4890]: E0121 16:31:37.100948 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerName="extract-utilities" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.100956 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerName="extract-utilities" Jan 21 16:31:37 crc kubenswrapper[4890]: E0121 16:31:37.100972 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerName="registry-server" Jan 21 
16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.100979 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerName="registry-server" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.101173 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="09bfb294-704b-42ff-bcbf-fafd02787b50" containerName="registry-server" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.101200 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="da358dc9-c0f4-44d5-88d2-d63df22d73ac" containerName="registry-server" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.102658 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.115383 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46dxk"] Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.245850 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-catalog-content\") pod \"redhat-operators-46dxk\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.245900 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-utilities\") pod \"redhat-operators-46dxk\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.246242 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjhh\" (UniqueName: 
\"kubernetes.io/projected/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-kube-api-access-lpjhh\") pod \"redhat-operators-46dxk\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.347313 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjhh\" (UniqueName: \"kubernetes.io/projected/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-kube-api-access-lpjhh\") pod \"redhat-operators-46dxk\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.347396 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-catalog-content\") pod \"redhat-operators-46dxk\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.347433 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-utilities\") pod \"redhat-operators-46dxk\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.348051 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-utilities\") pod \"redhat-operators-46dxk\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.348085 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-catalog-content\") pod \"redhat-operators-46dxk\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.368270 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjhh\" (UniqueName: \"kubernetes.io/projected/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-kube-api-access-lpjhh\") pod \"redhat-operators-46dxk\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.466251 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:37 crc kubenswrapper[4890]: I0121 16:31:37.874777 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46dxk"] Jan 21 16:31:38 crc kubenswrapper[4890]: E0121 16:31:38.186937 4890 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87fccfdd_5a99_426b_a3f8_9f9686e1bbb3.slice/crio-1fa95d8978eda9ff1f310d6924379847118cd78344ed24f912bd5dab30b15b5a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87fccfdd_5a99_426b_a3f8_9f9686e1bbb3.slice/crio-conmon-1fa95d8978eda9ff1f310d6924379847118cd78344ed24f912bd5dab30b15b5a.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:31:38 crc kubenswrapper[4890]: I0121 16:31:38.273628 4890 generic.go:334] "Generic (PLEG): container finished" podID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerID="1fa95d8978eda9ff1f310d6924379847118cd78344ed24f912bd5dab30b15b5a" exitCode=0 Jan 21 16:31:38 crc kubenswrapper[4890]: I0121 16:31:38.273685 4890 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-46dxk" event={"ID":"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3","Type":"ContainerDied","Data":"1fa95d8978eda9ff1f310d6924379847118cd78344ed24f912bd5dab30b15b5a"} Jan 21 16:31:38 crc kubenswrapper[4890]: I0121 16:31:38.273708 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46dxk" event={"ID":"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3","Type":"ContainerStarted","Data":"c2c3f7ceb3251091431e5b58f19adab351e4c12b750a17664c260257d9d22b89"} Jan 21 16:31:40 crc kubenswrapper[4890]: I0121 16:31:40.288485 4890 generic.go:334] "Generic (PLEG): container finished" podID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerID="7c382dbc2e150631d8f5ef0f781351130cc0534256fe0ebc00d866ffc57bbbd6" exitCode=0 Jan 21 16:31:40 crc kubenswrapper[4890]: I0121 16:31:40.288575 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46dxk" event={"ID":"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3","Type":"ContainerDied","Data":"7c382dbc2e150631d8f5ef0f781351130cc0534256fe0ebc00d866ffc57bbbd6"} Jan 21 16:31:41 crc kubenswrapper[4890]: I0121 16:31:41.296942 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46dxk" event={"ID":"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3","Type":"ContainerStarted","Data":"5df511f55f77bce66473fa4d6260e973bd2b3b82c00c582c84db541caab95943"} Jan 21 16:31:41 crc kubenswrapper[4890]: I0121 16:31:41.318631 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-46dxk" podStartSLOduration=1.90795527 podStartE2EDuration="4.318612195s" podCreationTimestamp="2026-01-21 16:31:37 +0000 UTC" firstStartedPulling="2026-01-21 16:31:38.275382867 +0000 UTC m=+3580.636825276" lastFinishedPulling="2026-01-21 16:31:40.686039792 +0000 UTC m=+3583.047482201" observedRunningTime="2026-01-21 16:31:41.317698493 +0000 UTC m=+3583.679140902" 
watchObservedRunningTime="2026-01-21 16:31:41.318612195 +0000 UTC m=+3583.680054604" Jan 21 16:31:47 crc kubenswrapper[4890]: I0121 16:31:47.466941 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:47 crc kubenswrapper[4890]: I0121 16:31:47.467232 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:47 crc kubenswrapper[4890]: I0121 16:31:47.509286 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:48 crc kubenswrapper[4890]: I0121 16:31:48.397665 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:48 crc kubenswrapper[4890]: I0121 16:31:48.450711 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46dxk"] Jan 21 16:31:48 crc kubenswrapper[4890]: I0121 16:31:48.761766 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:31:48 crc kubenswrapper[4890]: I0121 16:31:48.761831 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:31:50 crc kubenswrapper[4890]: I0121 16:31:50.350419 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-46dxk" podUID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" 
containerName="registry-server" containerID="cri-o://5df511f55f77bce66473fa4d6260e973bd2b3b82c00c582c84db541caab95943" gracePeriod=2 Jan 21 16:31:52 crc kubenswrapper[4890]: I0121 16:31:52.364908 4890 generic.go:334] "Generic (PLEG): container finished" podID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerID="5df511f55f77bce66473fa4d6260e973bd2b3b82c00c582c84db541caab95943" exitCode=0 Jan 21 16:31:52 crc kubenswrapper[4890]: I0121 16:31:52.365001 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46dxk" event={"ID":"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3","Type":"ContainerDied","Data":"5df511f55f77bce66473fa4d6260e973bd2b3b82c00c582c84db541caab95943"} Jan 21 16:31:52 crc kubenswrapper[4890]: I0121 16:31:52.717078 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:52 crc kubenswrapper[4890]: I0121 16:31:52.887850 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-utilities\") pod \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " Jan 21 16:31:52 crc kubenswrapper[4890]: I0121 16:31:52.888259 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-catalog-content\") pod \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " Jan 21 16:31:52 crc kubenswrapper[4890]: I0121 16:31:52.888280 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpjhh\" (UniqueName: \"kubernetes.io/projected/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-kube-api-access-lpjhh\") pod \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\" (UID: \"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3\") " Jan 21 16:31:52 crc 
kubenswrapper[4890]: I0121 16:31:52.888627 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-utilities" (OuterVolumeSpecName: "utilities") pod "87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" (UID: "87fccfdd-5a99-426b-a3f8-9f9686e1bbb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:31:52 crc kubenswrapper[4890]: I0121 16:31:52.893679 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-kube-api-access-lpjhh" (OuterVolumeSpecName: "kube-api-access-lpjhh") pod "87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" (UID: "87fccfdd-5a99-426b-a3f8-9f9686e1bbb3"). InnerVolumeSpecName "kube-api-access-lpjhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:31:52 crc kubenswrapper[4890]: I0121 16:31:52.990227 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpjhh\" (UniqueName: \"kubernetes.io/projected/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-kube-api-access-lpjhh\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:52 crc kubenswrapper[4890]: I0121 16:31:52.990263 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.027385 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" (UID: "87fccfdd-5a99-426b-a3f8-9f9686e1bbb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.091150 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.381628 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46dxk" event={"ID":"87fccfdd-5a99-426b-a3f8-9f9686e1bbb3","Type":"ContainerDied","Data":"c2c3f7ceb3251091431e5b58f19adab351e4c12b750a17664c260257d9d22b89"} Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.381699 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46dxk" Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.381714 4890 scope.go:117] "RemoveContainer" containerID="5df511f55f77bce66473fa4d6260e973bd2b3b82c00c582c84db541caab95943" Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.408020 4890 scope.go:117] "RemoveContainer" containerID="7c382dbc2e150631d8f5ef0f781351130cc0534256fe0ebc00d866ffc57bbbd6" Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.420303 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46dxk"] Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.427201 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-46dxk"] Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.448125 4890 scope.go:117] "RemoveContainer" containerID="1fa95d8978eda9ff1f310d6924379847118cd78344ed24f912bd5dab30b15b5a" Jan 21 16:31:53 crc kubenswrapper[4890]: I0121 16:31:53.927955 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" path="/var/lib/kubelet/pods/87fccfdd-5a99-426b-a3f8-9f9686e1bbb3/volumes" Jan 21 16:32:18 crc 
kubenswrapper[4890]: I0121 16:32:18.761872 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:32:18 crc kubenswrapper[4890]: I0121 16:32:18.762799 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:32:18 crc kubenswrapper[4890]: I0121 16:32:18.762855 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:32:18 crc kubenswrapper[4890]: I0121 16:32:18.763516 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02fbcb98dedd37c7be52f8f98f7e0f54d9e4fa175b8070234096b1ad7c7d0957"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:32:18 crc kubenswrapper[4890]: I0121 16:32:18.763597 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://02fbcb98dedd37c7be52f8f98f7e0f54d9e4fa175b8070234096b1ad7c7d0957" gracePeriod=600 Jan 21 16:32:19 crc kubenswrapper[4890]: I0121 16:32:19.573785 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" 
containerID="02fbcb98dedd37c7be52f8f98f7e0f54d9e4fa175b8070234096b1ad7c7d0957" exitCode=0 Jan 21 16:32:19 crc kubenswrapper[4890]: I0121 16:32:19.573864 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"02fbcb98dedd37c7be52f8f98f7e0f54d9e4fa175b8070234096b1ad7c7d0957"} Jan 21 16:32:19 crc kubenswrapper[4890]: I0121 16:32:19.574316 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12"} Jan 21 16:32:19 crc kubenswrapper[4890]: I0121 16:32:19.574340 4890 scope.go:117] "RemoveContainer" containerID="d658bbd902c17df419e88903a4c984c9ac68f48ca7b2c35c781e607581f18feb" Jan 21 16:34:48 crc kubenswrapper[4890]: I0121 16:34:48.762711 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:34:48 crc kubenswrapper[4890]: I0121 16:34:48.763302 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:35:18 crc kubenswrapper[4890]: I0121 16:35:18.762646 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 21 16:35:18 crc kubenswrapper[4890]: I0121 16:35:18.763216 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:35:48 crc kubenswrapper[4890]: I0121 16:35:48.788187 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:35:48 crc kubenswrapper[4890]: I0121 16:35:48.788959 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:35:48 crc kubenswrapper[4890]: I0121 16:35:48.789011 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:35:48 crc kubenswrapper[4890]: I0121 16:35:48.789611 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:35:48 crc kubenswrapper[4890]: I0121 16:35:48.789663 4890 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" gracePeriod=600 Jan 21 16:35:48 crc kubenswrapper[4890]: E0121 16:35:48.911707 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:35:49 crc kubenswrapper[4890]: I0121 16:35:49.064684 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" exitCode=0 Jan 21 16:35:49 crc kubenswrapper[4890]: I0121 16:35:49.064733 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12"} Jan 21 16:35:49 crc kubenswrapper[4890]: I0121 16:35:49.064768 4890 scope.go:117] "RemoveContainer" containerID="02fbcb98dedd37c7be52f8f98f7e0f54d9e4fa175b8070234096b1ad7c7d0957" Jan 21 16:35:49 crc kubenswrapper[4890]: I0121 16:35:49.065406 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:35:49 crc kubenswrapper[4890]: E0121 16:35:49.065667 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:36:02 crc kubenswrapper[4890]: I0121 16:36:02.913796 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:36:02 crc kubenswrapper[4890]: E0121 16:36:02.915752 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:36:16 crc kubenswrapper[4890]: I0121 16:36:16.914843 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:36:16 crc kubenswrapper[4890]: E0121 16:36:16.915548 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:36:31 crc kubenswrapper[4890]: I0121 16:36:31.913936 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:36:31 crc kubenswrapper[4890]: E0121 16:36:31.914802 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:36:46 crc kubenswrapper[4890]: I0121 16:36:46.915274 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:36:46 crc kubenswrapper[4890]: E0121 16:36:46.917085 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:36:57 crc kubenswrapper[4890]: I0121 16:36:57.914133 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:36:57 crc kubenswrapper[4890]: E0121 16:36:57.915442 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:37:09 crc kubenswrapper[4890]: I0121 16:37:09.914646 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:37:09 crc kubenswrapper[4890]: E0121 16:37:09.915480 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:37:22 crc kubenswrapper[4890]: I0121 16:37:22.914328 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:37:22 crc kubenswrapper[4890]: E0121 16:37:22.915203 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:37:35 crc kubenswrapper[4890]: I0121 16:37:35.914908 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:37:35 crc kubenswrapper[4890]: E0121 16:37:35.916246 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:37:49 crc kubenswrapper[4890]: I0121 16:37:49.914503 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:37:49 crc kubenswrapper[4890]: E0121 16:37:49.915297 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:38:01 crc kubenswrapper[4890]: I0121 16:38:01.913928 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:38:01 crc kubenswrapper[4890]: E0121 16:38:01.914487 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:38:12 crc kubenswrapper[4890]: I0121 16:38:12.915132 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:38:12 crc kubenswrapper[4890]: E0121 16:38:12.916040 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:38:27 crc kubenswrapper[4890]: I0121 16:38:27.918304 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:38:27 crc kubenswrapper[4890]: E0121 16:38:27.919213 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:38:42 crc kubenswrapper[4890]: I0121 16:38:42.914553 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:38:42 crc kubenswrapper[4890]: E0121 16:38:42.915309 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:38:55 crc kubenswrapper[4890]: I0121 16:38:55.915009 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:38:55 crc kubenswrapper[4890]: E0121 16:38:55.915842 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:39:07 crc kubenswrapper[4890]: I0121 16:39:07.919261 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:39:07 crc kubenswrapper[4890]: E0121 16:39:07.921302 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:39:21 crc kubenswrapper[4890]: I0121 16:39:21.914811 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:39:21 crc kubenswrapper[4890]: E0121 16:39:21.915566 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.811551 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8d4wl"] Jan 21 16:39:22 crc kubenswrapper[4890]: E0121 16:39:22.811905 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerName="registry-server" Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.811925 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerName="registry-server" Jan 21 16:39:22 crc kubenswrapper[4890]: E0121 16:39:22.811955 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerName="extract-utilities" Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.811963 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerName="extract-utilities" Jan 21 16:39:22 crc 
kubenswrapper[4890]: E0121 16:39:22.811979 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerName="extract-content" Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.811989 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerName="extract-content" Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.812161 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="87fccfdd-5a99-426b-a3f8-9f9686e1bbb3" containerName="registry-server" Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.813461 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.831759 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d4wl"] Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.944875 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwd68\" (UniqueName: \"kubernetes.io/projected/e802d27f-37c9-495e-8550-6583d4c4b0b9-kube-api-access-gwd68\") pod \"certified-operators-8d4wl\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.945239 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-utilities\") pod \"certified-operators-8d4wl\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:22 crc kubenswrapper[4890]: I0121 16:39:22.945536 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-catalog-content\") pod \"certified-operators-8d4wl\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:23 crc kubenswrapper[4890]: I0121 16:39:23.047309 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-utilities\") pod \"certified-operators-8d4wl\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:23 crc kubenswrapper[4890]: I0121 16:39:23.047394 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-catalog-content\") pod \"certified-operators-8d4wl\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:23 crc kubenswrapper[4890]: I0121 16:39:23.048132 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-catalog-content\") pod \"certified-operators-8d4wl\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:23 crc kubenswrapper[4890]: I0121 16:39:23.048127 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-utilities\") pod \"certified-operators-8d4wl\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:23 crc kubenswrapper[4890]: I0121 16:39:23.048243 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwd68\" (UniqueName: 
\"kubernetes.io/projected/e802d27f-37c9-495e-8550-6583d4c4b0b9-kube-api-access-gwd68\") pod \"certified-operators-8d4wl\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:23 crc kubenswrapper[4890]: I0121 16:39:23.079677 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwd68\" (UniqueName: \"kubernetes.io/projected/e802d27f-37c9-495e-8550-6583d4c4b0b9-kube-api-access-gwd68\") pod \"certified-operators-8d4wl\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:23 crc kubenswrapper[4890]: I0121 16:39:23.134713 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:23 crc kubenswrapper[4890]: I0121 16:39:23.678504 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d4wl"] Jan 21 16:39:24 crc kubenswrapper[4890]: I0121 16:39:24.538287 4890 generic.go:334] "Generic (PLEG): container finished" podID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerID="310f04c92e975e905dd573e903347c3696e687a7fcbdf34edf67e59e967fecd6" exitCode=0 Jan 21 16:39:24 crc kubenswrapper[4890]: I0121 16:39:24.538333 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4wl" event={"ID":"e802d27f-37c9-495e-8550-6583d4c4b0b9","Type":"ContainerDied","Data":"310f04c92e975e905dd573e903347c3696e687a7fcbdf34edf67e59e967fecd6"} Jan 21 16:39:24 crc kubenswrapper[4890]: I0121 16:39:24.538596 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4wl" event={"ID":"e802d27f-37c9-495e-8550-6583d4c4b0b9","Type":"ContainerStarted","Data":"9c6def8cdc0797a36b931644372b2195e3486118646983f1323c8bab2fb15eae"} Jan 21 16:39:24 crc kubenswrapper[4890]: I0121 16:39:24.541636 4890 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:39:25 crc kubenswrapper[4890]: I0121 16:39:25.545886 4890 generic.go:334] "Generic (PLEG): container finished" podID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerID="6a8d651de067f2570a2b1615c30835b45b46fee97dba35aefde34cc0225ee4c7" exitCode=0 Jan 21 16:39:25 crc kubenswrapper[4890]: I0121 16:39:25.545935 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4wl" event={"ID":"e802d27f-37c9-495e-8550-6583d4c4b0b9","Type":"ContainerDied","Data":"6a8d651de067f2570a2b1615c30835b45b46fee97dba35aefde34cc0225ee4c7"} Jan 21 16:39:26 crc kubenswrapper[4890]: I0121 16:39:26.556028 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4wl" event={"ID":"e802d27f-37c9-495e-8550-6583d4c4b0b9","Type":"ContainerStarted","Data":"8769d0279b1a558ac365e3dddc0c9bc6e50828918cf142611b81c6d231eb6bde"} Jan 21 16:39:26 crc kubenswrapper[4890]: I0121 16:39:26.576886 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8d4wl" podStartSLOduration=3.15698189 podStartE2EDuration="4.576868437s" podCreationTimestamp="2026-01-21 16:39:22 +0000 UTC" firstStartedPulling="2026-01-21 16:39:24.541396945 +0000 UTC m=+4046.902839354" lastFinishedPulling="2026-01-21 16:39:25.961283492 +0000 UTC m=+4048.322725901" observedRunningTime="2026-01-21 16:39:26.575029871 +0000 UTC m=+4048.936472280" watchObservedRunningTime="2026-01-21 16:39:26.576868437 +0000 UTC m=+4048.938310856" Jan 21 16:39:33 crc kubenswrapper[4890]: I0121 16:39:33.135154 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:33 crc kubenswrapper[4890]: I0121 16:39:33.136505 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:33 crc 
kubenswrapper[4890]: I0121 16:39:33.187780 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:33 crc kubenswrapper[4890]: I0121 16:39:33.637934 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:33 crc kubenswrapper[4890]: I0121 16:39:33.682842 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8d4wl"] Jan 21 16:39:35 crc kubenswrapper[4890]: I0121 16:39:35.608877 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8d4wl" podUID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerName="registry-server" containerID="cri-o://8769d0279b1a558ac365e3dddc0c9bc6e50828918cf142611b81c6d231eb6bde" gracePeriod=2 Jan 21 16:39:35 crc kubenswrapper[4890]: I0121 16:39:35.914395 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:39:35 crc kubenswrapper[4890]: E0121 16:39:35.915036 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:39:36 crc kubenswrapper[4890]: I0121 16:39:36.617917 4890 generic.go:334] "Generic (PLEG): container finished" podID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerID="8769d0279b1a558ac365e3dddc0c9bc6e50828918cf142611b81c6d231eb6bde" exitCode=0 Jan 21 16:39:36 crc kubenswrapper[4890]: I0121 16:39:36.617967 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8d4wl" event={"ID":"e802d27f-37c9-495e-8550-6583d4c4b0b9","Type":"ContainerDied","Data":"8769d0279b1a558ac365e3dddc0c9bc6e50828918cf142611b81c6d231eb6bde"} Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.113979 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.243678 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwd68\" (UniqueName: \"kubernetes.io/projected/e802d27f-37c9-495e-8550-6583d4c4b0b9-kube-api-access-gwd68\") pod \"e802d27f-37c9-495e-8550-6583d4c4b0b9\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.243788 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-utilities\") pod \"e802d27f-37c9-495e-8550-6583d4c4b0b9\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.243893 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-catalog-content\") pod \"e802d27f-37c9-495e-8550-6583d4c4b0b9\" (UID: \"e802d27f-37c9-495e-8550-6583d4c4b0b9\") " Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.244684 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-utilities" (OuterVolumeSpecName: "utilities") pod "e802d27f-37c9-495e-8550-6583d4c4b0b9" (UID: "e802d27f-37c9-495e-8550-6583d4c4b0b9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.250553 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e802d27f-37c9-495e-8550-6583d4c4b0b9-kube-api-access-gwd68" (OuterVolumeSpecName: "kube-api-access-gwd68") pod "e802d27f-37c9-495e-8550-6583d4c4b0b9" (UID: "e802d27f-37c9-495e-8550-6583d4c4b0b9"). InnerVolumeSpecName "kube-api-access-gwd68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.288927 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e802d27f-37c9-495e-8550-6583d4c4b0b9" (UID: "e802d27f-37c9-495e-8550-6583d4c4b0b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.345098 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.345140 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwd68\" (UniqueName: \"kubernetes.io/projected/e802d27f-37c9-495e-8550-6583d4c4b0b9-kube-api-access-gwd68\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.345156 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e802d27f-37c9-495e-8550-6583d4c4b0b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.628402 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4wl" 
event={"ID":"e802d27f-37c9-495e-8550-6583d4c4b0b9","Type":"ContainerDied","Data":"9c6def8cdc0797a36b931644372b2195e3486118646983f1323c8bab2fb15eae"} Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.628628 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d4wl" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.629258 4890 scope.go:117] "RemoveContainer" containerID="8769d0279b1a558ac365e3dddc0c9bc6e50828918cf142611b81c6d231eb6bde" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.647281 4890 scope.go:117] "RemoveContainer" containerID="6a8d651de067f2570a2b1615c30835b45b46fee97dba35aefde34cc0225ee4c7" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.658894 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8d4wl"] Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.664040 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8d4wl"] Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.692224 4890 scope.go:117] "RemoveContainer" containerID="310f04c92e975e905dd573e903347c3696e687a7fcbdf34edf67e59e967fecd6" Jan 21 16:39:37 crc kubenswrapper[4890]: I0121 16:39:37.923552 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e802d27f-37c9-495e-8550-6583d4c4b0b9" path="/var/lib/kubelet/pods/e802d27f-37c9-495e-8550-6583d4c4b0b9/volumes" Jan 21 16:39:49 crc kubenswrapper[4890]: I0121 16:39:49.916887 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:39:49 crc kubenswrapper[4890]: E0121 16:39:49.917495 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:40:00 crc kubenswrapper[4890]: I0121 16:40:00.914460 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:40:00 crc kubenswrapper[4890]: E0121 16:40:00.915169 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:40:13 crc kubenswrapper[4890]: I0121 16:40:13.914564 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:40:13 crc kubenswrapper[4890]: E0121 16:40:13.915504 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:40:27 crc kubenswrapper[4890]: I0121 16:40:27.917699 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:40:27 crc kubenswrapper[4890]: E0121 16:40:27.918470 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:40:39 crc kubenswrapper[4890]: I0121 16:40:39.915857 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:40:39 crc kubenswrapper[4890]: E0121 16:40:39.916700 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:40:53 crc kubenswrapper[4890]: I0121 16:40:53.914928 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:40:54 crc kubenswrapper[4890]: I0121 16:40:54.127538 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"67bb4821ecd930d526dc1eae5b432050cf5f8355483b28ebb73dadcc300848fc"} Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.821014 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-29hh4"] Jan 21 16:41:19 crc kubenswrapper[4890]: E0121 16:41:19.822914 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerName="extract-utilities" Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.822941 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e802d27f-37c9-495e-8550-6583d4c4b0b9" 
containerName="extract-utilities" Jan 21 16:41:19 crc kubenswrapper[4890]: E0121 16:41:19.822956 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerName="registry-server" Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.822965 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerName="registry-server" Jan 21 16:41:19 crc kubenswrapper[4890]: E0121 16:41:19.822994 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerName="extract-content" Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.823004 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerName="extract-content" Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.823193 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e802d27f-37c9-495e-8550-6583d4c4b0b9" containerName="registry-server" Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.824328 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.832765 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-29hh4"] Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.952473 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-catalog-content\") pod \"community-operators-29hh4\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.952818 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-utilities\") pod \"community-operators-29hh4\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:19 crc kubenswrapper[4890]: I0121 16:41:19.953010 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zr2q\" (UniqueName: \"kubernetes.io/projected/0d421bc8-54cb-487f-bd07-3218c3dc6c35-kube-api-access-9zr2q\") pod \"community-operators-29hh4\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:20 crc kubenswrapper[4890]: I0121 16:41:20.054548 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-catalog-content\") pod \"community-operators-29hh4\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:20 crc kubenswrapper[4890]: I0121 16:41:20.054878 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-utilities\") pod \"community-operators-29hh4\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:20 crc kubenswrapper[4890]: I0121 16:41:20.054967 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zr2q\" (UniqueName: \"kubernetes.io/projected/0d421bc8-54cb-487f-bd07-3218c3dc6c35-kube-api-access-9zr2q\") pod \"community-operators-29hh4\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:20 crc kubenswrapper[4890]: I0121 16:41:20.055131 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-catalog-content\") pod \"community-operators-29hh4\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:20 crc kubenswrapper[4890]: I0121 16:41:20.055639 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-utilities\") pod \"community-operators-29hh4\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:20 crc kubenswrapper[4890]: I0121 16:41:20.076441 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zr2q\" (UniqueName: \"kubernetes.io/projected/0d421bc8-54cb-487f-bd07-3218c3dc6c35-kube-api-access-9zr2q\") pod \"community-operators-29hh4\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:20 crc kubenswrapper[4890]: I0121 16:41:20.148728 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:20 crc kubenswrapper[4890]: I0121 16:41:20.647883 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-29hh4"] Jan 21 16:41:21 crc kubenswrapper[4890]: I0121 16:41:21.303751 4890 generic.go:334] "Generic (PLEG): container finished" podID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerID="050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16" exitCode=0 Jan 21 16:41:21 crc kubenswrapper[4890]: I0121 16:41:21.303827 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29hh4" event={"ID":"0d421bc8-54cb-487f-bd07-3218c3dc6c35","Type":"ContainerDied","Data":"050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16"} Jan 21 16:41:21 crc kubenswrapper[4890]: I0121 16:41:21.303890 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29hh4" event={"ID":"0d421bc8-54cb-487f-bd07-3218c3dc6c35","Type":"ContainerStarted","Data":"c99b566a32a9e77e2923289ab845100a98fcb4aa288482740c2e4516b4dc14b2"} Jan 21 16:41:22 crc kubenswrapper[4890]: I0121 16:41:22.313375 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29hh4" event={"ID":"0d421bc8-54cb-487f-bd07-3218c3dc6c35","Type":"ContainerStarted","Data":"9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067"} Jan 21 16:41:23 crc kubenswrapper[4890]: I0121 16:41:23.321604 4890 generic.go:334] "Generic (PLEG): container finished" podID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerID="9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067" exitCode=0 Jan 21 16:41:23 crc kubenswrapper[4890]: I0121 16:41:23.321727 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29hh4" 
event={"ID":"0d421bc8-54cb-487f-bd07-3218c3dc6c35","Type":"ContainerDied","Data":"9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067"} Jan 21 16:41:24 crc kubenswrapper[4890]: I0121 16:41:24.334112 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29hh4" event={"ID":"0d421bc8-54cb-487f-bd07-3218c3dc6c35","Type":"ContainerStarted","Data":"9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197"} Jan 21 16:41:24 crc kubenswrapper[4890]: I0121 16:41:24.362472 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-29hh4" podStartSLOduration=2.946154162 podStartE2EDuration="5.362451499s" podCreationTimestamp="2026-01-21 16:41:19 +0000 UTC" firstStartedPulling="2026-01-21 16:41:21.305631403 +0000 UTC m=+4163.667073822" lastFinishedPulling="2026-01-21 16:41:23.72192875 +0000 UTC m=+4166.083371159" observedRunningTime="2026-01-21 16:41:24.355966527 +0000 UTC m=+4166.717408936" watchObservedRunningTime="2026-01-21 16:41:24.362451499 +0000 UTC m=+4166.723893908" Jan 21 16:41:30 crc kubenswrapper[4890]: I0121 16:41:30.150150 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:30 crc kubenswrapper[4890]: I0121 16:41:30.150712 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:30 crc kubenswrapper[4890]: I0121 16:41:30.190657 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:30 crc kubenswrapper[4890]: I0121 16:41:30.414618 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:30 crc kubenswrapper[4890]: I0121 16:41:30.456340 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-29hh4"] Jan 21 16:41:32 crc kubenswrapper[4890]: I0121 16:41:32.386692 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-29hh4" podUID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerName="registry-server" containerID="cri-o://9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197" gracePeriod=2 Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.081453 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.259111 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zr2q\" (UniqueName: \"kubernetes.io/projected/0d421bc8-54cb-487f-bd07-3218c3dc6c35-kube-api-access-9zr2q\") pod \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.259277 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-utilities\") pod \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.259383 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-catalog-content\") pod \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\" (UID: \"0d421bc8-54cb-487f-bd07-3218c3dc6c35\") " Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.260725 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-utilities" (OuterVolumeSpecName: "utilities") pod "0d421bc8-54cb-487f-bd07-3218c3dc6c35" (UID: 
"0d421bc8-54cb-487f-bd07-3218c3dc6c35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.265133 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d421bc8-54cb-487f-bd07-3218c3dc6c35-kube-api-access-9zr2q" (OuterVolumeSpecName: "kube-api-access-9zr2q") pod "0d421bc8-54cb-487f-bd07-3218c3dc6c35" (UID: "0d421bc8-54cb-487f-bd07-3218c3dc6c35"). InnerVolumeSpecName "kube-api-access-9zr2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.323586 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d421bc8-54cb-487f-bd07-3218c3dc6c35" (UID: "0d421bc8-54cb-487f-bd07-3218c3dc6c35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.361125 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.361162 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zr2q\" (UniqueName: \"kubernetes.io/projected/0d421bc8-54cb-487f-bd07-3218c3dc6c35-kube-api-access-9zr2q\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.361174 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d421bc8-54cb-487f-bd07-3218c3dc6c35-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.420720 4890 generic.go:334] "Generic (PLEG): container finished" 
podID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerID="9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197" exitCode=0 Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.420771 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29hh4" event={"ID":"0d421bc8-54cb-487f-bd07-3218c3dc6c35","Type":"ContainerDied","Data":"9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197"} Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.420802 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29hh4" event={"ID":"0d421bc8-54cb-487f-bd07-3218c3dc6c35","Type":"ContainerDied","Data":"c99b566a32a9e77e2923289ab845100a98fcb4aa288482740c2e4516b4dc14b2"} Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.420810 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-29hh4" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.420819 4890 scope.go:117] "RemoveContainer" containerID="9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.440443 4890 scope.go:117] "RemoveContainer" containerID="9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.466515 4890 scope.go:117] "RemoveContainer" containerID="050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.503715 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-29hh4"] Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.503806 4890 scope.go:117] "RemoveContainer" containerID="9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197" Jan 21 16:41:34 crc kubenswrapper[4890]: E0121 16:41:34.504716 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197\": container with ID starting with 9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197 not found: ID does not exist" containerID="9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.504796 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197"} err="failed to get container status \"9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197\": rpc error: code = NotFound desc = could not find container \"9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197\": container with ID starting with 9dd7bdb7a99fc8fb2f4ebe5538fb294920b8dca4ab6596a5b515e9e961a71197 not found: ID does not exist" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.504852 4890 scope.go:117] "RemoveContainer" containerID="9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067" Jan 21 16:41:34 crc kubenswrapper[4890]: E0121 16:41:34.505514 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067\": container with ID starting with 9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067 not found: ID does not exist" containerID="9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.505576 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067"} err="failed to get container status \"9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067\": rpc error: code = NotFound desc = could not find container 
\"9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067\": container with ID starting with 9b0f13b2fb92062911dc49869cd3369043f435abd2b41b8354ed8720eda2c067 not found: ID does not exist" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.505619 4890 scope.go:117] "RemoveContainer" containerID="050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16" Jan 21 16:41:34 crc kubenswrapper[4890]: E0121 16:41:34.506042 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16\": container with ID starting with 050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16 not found: ID does not exist" containerID="050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.506074 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16"} err="failed to get container status \"050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16\": rpc error: code = NotFound desc = could not find container \"050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16\": container with ID starting with 050b5700330810f0dc8d71542bb9eda02f72beef5734e284451f36ddd2374d16 not found: ID does not exist" Jan 21 16:41:34 crc kubenswrapper[4890]: I0121 16:41:34.519654 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-29hh4"] Jan 21 16:41:35 crc kubenswrapper[4890]: I0121 16:41:35.922265 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" path="/var/lib/kubelet/pods/0d421bc8-54cb-487f-bd07-3218c3dc6c35/volumes" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.478173 4890 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-rtl6v"] Jan 21 16:41:41 crc kubenswrapper[4890]: E0121 16:41:41.479076 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerName="registry-server" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.479089 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerName="registry-server" Jan 21 16:41:41 crc kubenswrapper[4890]: E0121 16:41:41.479110 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerName="extract-content" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.479117 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerName="extract-content" Jan 21 16:41:41 crc kubenswrapper[4890]: E0121 16:41:41.479128 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerName="extract-utilities" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.479135 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerName="extract-utilities" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.479272 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d421bc8-54cb-487f-bd07-3218c3dc6c35" containerName="registry-server" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.484686 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.505250 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtl6v"] Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.662505 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrp2\" (UniqueName: \"kubernetes.io/projected/80052aee-a306-46ff-951e-07381f9808bf-kube-api-access-rgrp2\") pod \"redhat-marketplace-rtl6v\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.662662 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-catalog-content\") pod \"redhat-marketplace-rtl6v\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.662686 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-utilities\") pod \"redhat-marketplace-rtl6v\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.764385 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-catalog-content\") pod \"redhat-marketplace-rtl6v\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.764442 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-utilities\") pod \"redhat-marketplace-rtl6v\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.764496 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgrp2\" (UniqueName: \"kubernetes.io/projected/80052aee-a306-46ff-951e-07381f9808bf-kube-api-access-rgrp2\") pod \"redhat-marketplace-rtl6v\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.764955 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-catalog-content\") pod \"redhat-marketplace-rtl6v\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.765120 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-utilities\") pod \"redhat-marketplace-rtl6v\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.785956 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgrp2\" (UniqueName: \"kubernetes.io/projected/80052aee-a306-46ff-951e-07381f9808bf-kube-api-access-rgrp2\") pod \"redhat-marketplace-rtl6v\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:41 crc kubenswrapper[4890]: I0121 16:41:41.809891 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:42 crc kubenswrapper[4890]: I0121 16:41:42.269365 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtl6v"] Jan 21 16:41:42 crc kubenswrapper[4890]: I0121 16:41:42.483087 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtl6v" event={"ID":"80052aee-a306-46ff-951e-07381f9808bf","Type":"ContainerStarted","Data":"a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b"} Jan 21 16:41:42 crc kubenswrapper[4890]: I0121 16:41:42.483465 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtl6v" event={"ID":"80052aee-a306-46ff-951e-07381f9808bf","Type":"ContainerStarted","Data":"7bfa722b975b6d73be1142ebccdd37070ac625bd8c6538de74df78cd879dabed"} Jan 21 16:41:43 crc kubenswrapper[4890]: I0121 16:41:43.499155 4890 generic.go:334] "Generic (PLEG): container finished" podID="80052aee-a306-46ff-951e-07381f9808bf" containerID="a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b" exitCode=0 Jan 21 16:41:43 crc kubenswrapper[4890]: I0121 16:41:43.499221 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtl6v" event={"ID":"80052aee-a306-46ff-951e-07381f9808bf","Type":"ContainerDied","Data":"a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b"} Jan 21 16:41:44 crc kubenswrapper[4890]: I0121 16:41:44.507415 4890 generic.go:334] "Generic (PLEG): container finished" podID="80052aee-a306-46ff-951e-07381f9808bf" containerID="8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8" exitCode=0 Jan 21 16:41:44 crc kubenswrapper[4890]: I0121 16:41:44.507473 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtl6v" 
event={"ID":"80052aee-a306-46ff-951e-07381f9808bf","Type":"ContainerDied","Data":"8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8"} Jan 21 16:41:45 crc kubenswrapper[4890]: I0121 16:41:45.518563 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtl6v" event={"ID":"80052aee-a306-46ff-951e-07381f9808bf","Type":"ContainerStarted","Data":"1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637"} Jan 21 16:41:45 crc kubenswrapper[4890]: I0121 16:41:45.539056 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rtl6v" podStartSLOduration=3.12052046 podStartE2EDuration="4.539036623s" podCreationTimestamp="2026-01-21 16:41:41 +0000 UTC" firstStartedPulling="2026-01-21 16:41:43.501025778 +0000 UTC m=+4185.862468187" lastFinishedPulling="2026-01-21 16:41:44.919541941 +0000 UTC m=+4187.280984350" observedRunningTime="2026-01-21 16:41:45.535783642 +0000 UTC m=+4187.897226051" watchObservedRunningTime="2026-01-21 16:41:45.539036623 +0000 UTC m=+4187.900479032" Jan 21 16:41:51 crc kubenswrapper[4890]: I0121 16:41:51.810765 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:51 crc kubenswrapper[4890]: I0121 16:41:51.811407 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:51 crc kubenswrapper[4890]: I0121 16:41:51.855487 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:52 crc kubenswrapper[4890]: I0121 16:41:52.614191 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:52 crc kubenswrapper[4890]: I0121 16:41:52.667328 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-rtl6v"] Jan 21 16:41:54 crc kubenswrapper[4890]: I0121 16:41:54.579664 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rtl6v" podUID="80052aee-a306-46ff-951e-07381f9808bf" containerName="registry-server" containerID="cri-o://1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637" gracePeriod=2 Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.028500 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.155427 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-utilities\") pod \"80052aee-a306-46ff-951e-07381f9808bf\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.155508 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-catalog-content\") pod \"80052aee-a306-46ff-951e-07381f9808bf\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.155573 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgrp2\" (UniqueName: \"kubernetes.io/projected/80052aee-a306-46ff-951e-07381f9808bf-kube-api-access-rgrp2\") pod \"80052aee-a306-46ff-951e-07381f9808bf\" (UID: \"80052aee-a306-46ff-951e-07381f9808bf\") " Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.156289 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-utilities" (OuterVolumeSpecName: "utilities") pod "80052aee-a306-46ff-951e-07381f9808bf" (UID: 
"80052aee-a306-46ff-951e-07381f9808bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.161221 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80052aee-a306-46ff-951e-07381f9808bf-kube-api-access-rgrp2" (OuterVolumeSpecName: "kube-api-access-rgrp2") pod "80052aee-a306-46ff-951e-07381f9808bf" (UID: "80052aee-a306-46ff-951e-07381f9808bf"). InnerVolumeSpecName "kube-api-access-rgrp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.187893 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80052aee-a306-46ff-951e-07381f9808bf" (UID: "80052aee-a306-46ff-951e-07381f9808bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.257366 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgrp2\" (UniqueName: \"kubernetes.io/projected/80052aee-a306-46ff-951e-07381f9808bf-kube-api-access-rgrp2\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.257411 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.257422 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80052aee-a306-46ff-951e-07381f9808bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.588636 4890 generic.go:334] "Generic (PLEG): container finished" 
podID="80052aee-a306-46ff-951e-07381f9808bf" containerID="1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637" exitCode=0 Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.588681 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtl6v" event={"ID":"80052aee-a306-46ff-951e-07381f9808bf","Type":"ContainerDied","Data":"1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637"} Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.588704 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtl6v" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.588727 4890 scope.go:117] "RemoveContainer" containerID="1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.588713 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtl6v" event={"ID":"80052aee-a306-46ff-951e-07381f9808bf","Type":"ContainerDied","Data":"7bfa722b975b6d73be1142ebccdd37070ac625bd8c6538de74df78cd879dabed"} Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.612114 4890 scope.go:117] "RemoveContainer" containerID="8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.629569 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtl6v"] Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.635586 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtl6v"] Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.655600 4890 scope.go:117] "RemoveContainer" containerID="a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.668795 4890 scope.go:117] "RemoveContainer" 
containerID="1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637" Jan 21 16:41:55 crc kubenswrapper[4890]: E0121 16:41:55.669956 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637\": container with ID starting with 1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637 not found: ID does not exist" containerID="1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.670095 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637"} err="failed to get container status \"1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637\": rpc error: code = NotFound desc = could not find container \"1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637\": container with ID starting with 1ad05914bc9a2700a65be9440a7411ecf9c804ef6005864334d4c57b02d9f637 not found: ID does not exist" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.670176 4890 scope.go:117] "RemoveContainer" containerID="8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8" Jan 21 16:41:55 crc kubenswrapper[4890]: E0121 16:41:55.670490 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8\": container with ID starting with 8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8 not found: ID does not exist" containerID="8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.670573 4890 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8"} err="failed to get container status \"8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8\": rpc error: code = NotFound desc = could not find container \"8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8\": container with ID starting with 8204f4beb3819a56a71ac84875af961154c25fe501ae2fe874f5342aad0f78e8 not found: ID does not exist" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.670653 4890 scope.go:117] "RemoveContainer" containerID="a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b" Jan 21 16:41:55 crc kubenswrapper[4890]: E0121 16:41:55.670979 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b\": container with ID starting with a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b not found: ID does not exist" containerID="a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.671004 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b"} err="failed to get container status \"a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b\": rpc error: code = NotFound desc = could not find container \"a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b\": container with ID starting with a1bc52a66c9ecd87990101ec9177f16dcb00d6dff6b3748750f4a10bf34dae0b not found: ID does not exist" Jan 21 16:41:55 crc kubenswrapper[4890]: I0121 16:41:55.922805 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80052aee-a306-46ff-951e-07381f9808bf" path="/var/lib/kubelet/pods/80052aee-a306-46ff-951e-07381f9808bf/volumes" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 
16:42:29.725543 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhh27"] Jan 21 16:42:29 crc kubenswrapper[4890]: E0121 16:42:29.726482 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80052aee-a306-46ff-951e-07381f9808bf" containerName="registry-server" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.726498 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="80052aee-a306-46ff-951e-07381f9808bf" containerName="registry-server" Jan 21 16:42:29 crc kubenswrapper[4890]: E0121 16:42:29.726513 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80052aee-a306-46ff-951e-07381f9808bf" containerName="extract-content" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.726521 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="80052aee-a306-46ff-951e-07381f9808bf" containerName="extract-content" Jan 21 16:42:29 crc kubenswrapper[4890]: E0121 16:42:29.726531 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80052aee-a306-46ff-951e-07381f9808bf" containerName="extract-utilities" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.726538 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="80052aee-a306-46ff-951e-07381f9808bf" containerName="extract-utilities" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.726722 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="80052aee-a306-46ff-951e-07381f9808bf" containerName="registry-server" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.727973 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.730364 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhh27"] Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.757519 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxpx\" (UniqueName: \"kubernetes.io/projected/c96d5707-e851-44c2-a275-1e9ecf564279-kube-api-access-mxxpx\") pod \"redhat-operators-lhh27\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.757580 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-catalog-content\") pod \"redhat-operators-lhh27\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.757632 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-utilities\") pod \"redhat-operators-lhh27\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.858857 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-catalog-content\") pod \"redhat-operators-lhh27\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.859194 4890 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-utilities\") pod \"redhat-operators-lhh27\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.859390 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxxpx\" (UniqueName: \"kubernetes.io/projected/c96d5707-e851-44c2-a275-1e9ecf564279-kube-api-access-mxxpx\") pod \"redhat-operators-lhh27\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.859857 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-catalog-content\") pod \"redhat-operators-lhh27\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.860034 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-utilities\") pod \"redhat-operators-lhh27\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:29 crc kubenswrapper[4890]: I0121 16:42:29.878224 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxpx\" (UniqueName: \"kubernetes.io/projected/c96d5707-e851-44c2-a275-1e9ecf564279-kube-api-access-mxxpx\") pod \"redhat-operators-lhh27\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:30 crc kubenswrapper[4890]: I0121 16:42:30.075234 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:30 crc kubenswrapper[4890]: I0121 16:42:30.508574 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhh27"] Jan 21 16:42:30 crc kubenswrapper[4890]: I0121 16:42:30.840081 4890 generic.go:334] "Generic (PLEG): container finished" podID="c96d5707-e851-44c2-a275-1e9ecf564279" containerID="35f5baa8b9e991c7332a2e61dbe80fb12f8d71a3be9038ad99dad2566f11e819" exitCode=0 Jan 21 16:42:30 crc kubenswrapper[4890]: I0121 16:42:30.840136 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhh27" event={"ID":"c96d5707-e851-44c2-a275-1e9ecf564279","Type":"ContainerDied","Data":"35f5baa8b9e991c7332a2e61dbe80fb12f8d71a3be9038ad99dad2566f11e819"} Jan 21 16:42:30 crc kubenswrapper[4890]: I0121 16:42:30.840469 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhh27" event={"ID":"c96d5707-e851-44c2-a275-1e9ecf564279","Type":"ContainerStarted","Data":"45c6be7794d7649fd92758652882f9b3131ed71eca8314ff69e63dcf02fad453"} Jan 21 16:42:32 crc kubenswrapper[4890]: I0121 16:42:32.854994 4890 generic.go:334] "Generic (PLEG): container finished" podID="c96d5707-e851-44c2-a275-1e9ecf564279" containerID="da7652d8b9a7f84d0e28c0ce9251635f2f4ebae92eee13a9a9c4c0724ba1cc3f" exitCode=0 Jan 21 16:42:32 crc kubenswrapper[4890]: I0121 16:42:32.855181 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhh27" event={"ID":"c96d5707-e851-44c2-a275-1e9ecf564279","Type":"ContainerDied","Data":"da7652d8b9a7f84d0e28c0ce9251635f2f4ebae92eee13a9a9c4c0724ba1cc3f"} Jan 21 16:42:33 crc kubenswrapper[4890]: I0121 16:42:33.869132 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhh27" 
event={"ID":"c96d5707-e851-44c2-a275-1e9ecf564279","Type":"ContainerStarted","Data":"0f78536c87433ec9c15a2a64e11c9051b2c04dfd1b4ed75c2533005e478bd22f"} Jan 21 16:42:33 crc kubenswrapper[4890]: I0121 16:42:33.888803 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhh27" podStartSLOduration=2.433001707 podStartE2EDuration="4.888781576s" podCreationTimestamp="2026-01-21 16:42:29 +0000 UTC" firstStartedPulling="2026-01-21 16:42:30.841607993 +0000 UTC m=+4233.203050402" lastFinishedPulling="2026-01-21 16:42:33.297387862 +0000 UTC m=+4235.658830271" observedRunningTime="2026-01-21 16:42:33.88452845 +0000 UTC m=+4236.245970879" watchObservedRunningTime="2026-01-21 16:42:33.888781576 +0000 UTC m=+4236.250223985" Jan 21 16:42:40 crc kubenswrapper[4890]: I0121 16:42:40.075503 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:40 crc kubenswrapper[4890]: I0121 16:42:40.077016 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:40 crc kubenswrapper[4890]: I0121 16:42:40.124183 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:40 crc kubenswrapper[4890]: I0121 16:42:40.957425 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:41 crc kubenswrapper[4890]: I0121 16:42:41.670220 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhh27"] Jan 21 16:42:42 crc kubenswrapper[4890]: I0121 16:42:42.928504 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lhh27" podUID="c96d5707-e851-44c2-a275-1e9ecf564279" containerName="registry-server" 
containerID="cri-o://0f78536c87433ec9c15a2a64e11c9051b2c04dfd1b4ed75c2533005e478bd22f" gracePeriod=2 Jan 21 16:42:45 crc kubenswrapper[4890]: I0121 16:42:45.950514 4890 generic.go:334] "Generic (PLEG): container finished" podID="c96d5707-e851-44c2-a275-1e9ecf564279" containerID="0f78536c87433ec9c15a2a64e11c9051b2c04dfd1b4ed75c2533005e478bd22f" exitCode=0 Jan 21 16:42:45 crc kubenswrapper[4890]: I0121 16:42:45.950581 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhh27" event={"ID":"c96d5707-e851-44c2-a275-1e9ecf564279","Type":"ContainerDied","Data":"0f78536c87433ec9c15a2a64e11c9051b2c04dfd1b4ed75c2533005e478bd22f"} Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:45.999885 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.182088 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxxpx\" (UniqueName: \"kubernetes.io/projected/c96d5707-e851-44c2-a275-1e9ecf564279-kube-api-access-mxxpx\") pod \"c96d5707-e851-44c2-a275-1e9ecf564279\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.182158 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-utilities\") pod \"c96d5707-e851-44c2-a275-1e9ecf564279\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.182276 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-catalog-content\") pod \"c96d5707-e851-44c2-a275-1e9ecf564279\" (UID: \"c96d5707-e851-44c2-a275-1e9ecf564279\") " Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 
16:42:46.183501 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-utilities" (OuterVolumeSpecName: "utilities") pod "c96d5707-e851-44c2-a275-1e9ecf564279" (UID: "c96d5707-e851-44c2-a275-1e9ecf564279"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.188586 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96d5707-e851-44c2-a275-1e9ecf564279-kube-api-access-mxxpx" (OuterVolumeSpecName: "kube-api-access-mxxpx") pod "c96d5707-e851-44c2-a275-1e9ecf564279" (UID: "c96d5707-e851-44c2-a275-1e9ecf564279"). InnerVolumeSpecName "kube-api-access-mxxpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.284097 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.284134 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxxpx\" (UniqueName: \"kubernetes.io/projected/c96d5707-e851-44c2-a275-1e9ecf564279-kube-api-access-mxxpx\") on node \"crc\" DevicePath \"\"" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.311778 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c96d5707-e851-44c2-a275-1e9ecf564279" (UID: "c96d5707-e851-44c2-a275-1e9ecf564279"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.385837 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c96d5707-e851-44c2-a275-1e9ecf564279-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.963071 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhh27" event={"ID":"c96d5707-e851-44c2-a275-1e9ecf564279","Type":"ContainerDied","Data":"45c6be7794d7649fd92758652882f9b3131ed71eca8314ff69e63dcf02fad453"} Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.963170 4890 scope.go:117] "RemoveContainer" containerID="0f78536c87433ec9c15a2a64e11c9051b2c04dfd1b4ed75c2533005e478bd22f" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.963207 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhh27" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.985606 4890 scope.go:117] "RemoveContainer" containerID="da7652d8b9a7f84d0e28c0ce9251635f2f4ebae92eee13a9a9c4c0724ba1cc3f" Jan 21 16:42:46 crc kubenswrapper[4890]: I0121 16:42:46.998263 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhh27"] Jan 21 16:42:47 crc kubenswrapper[4890]: I0121 16:42:47.003384 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lhh27"] Jan 21 16:42:47 crc kubenswrapper[4890]: I0121 16:42:47.300662 4890 scope.go:117] "RemoveContainer" containerID="35f5baa8b9e991c7332a2e61dbe80fb12f8d71a3be9038ad99dad2566f11e819" Jan 21 16:42:47 crc kubenswrapper[4890]: I0121 16:42:47.938019 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96d5707-e851-44c2-a275-1e9ecf564279" path="/var/lib/kubelet/pods/c96d5707-e851-44c2-a275-1e9ecf564279/volumes" Jan 21 16:43:18 crc 
kubenswrapper[4890]: I0121 16:43:18.761852 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:43:18 crc kubenswrapper[4890]: I0121 16:43:18.762548 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:43:48 crc kubenswrapper[4890]: I0121 16:43:48.762240 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:43:48 crc kubenswrapper[4890]: I0121 16:43:48.762813 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:44:18 crc kubenswrapper[4890]: I0121 16:44:18.762324 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:44:18 crc kubenswrapper[4890]: I0121 16:44:18.762771 4890 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:44:18 crc kubenswrapper[4890]: I0121 16:44:18.762807 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:44:18 crc kubenswrapper[4890]: I0121 16:44:18.763302 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67bb4821ecd930d526dc1eae5b432050cf5f8355483b28ebb73dadcc300848fc"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:44:18 crc kubenswrapper[4890]: I0121 16:44:18.763344 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://67bb4821ecd930d526dc1eae5b432050cf5f8355483b28ebb73dadcc300848fc" gracePeriod=600 Jan 21 16:44:19 crc kubenswrapper[4890]: I0121 16:44:19.548105 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="67bb4821ecd930d526dc1eae5b432050cf5f8355483b28ebb73dadcc300848fc" exitCode=0 Jan 21 16:44:19 crc kubenswrapper[4890]: I0121 16:44:19.548321 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"67bb4821ecd930d526dc1eae5b432050cf5f8355483b28ebb73dadcc300848fc"} Jan 21 16:44:19 crc kubenswrapper[4890]: I0121 16:44:19.548835 4890 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334"} Jan 21 16:44:19 crc kubenswrapper[4890]: I0121 16:44:19.548864 4890 scope.go:117] "RemoveContainer" containerID="d60f55ec591be2e340bc6f58250e1e273269cf70693bacdac5b2f7c8faff5f12" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.186718 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6"] Jan 21 16:45:00 crc kubenswrapper[4890]: E0121 16:45:00.187626 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96d5707-e851-44c2-a275-1e9ecf564279" containerName="extract-content" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.187642 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96d5707-e851-44c2-a275-1e9ecf564279" containerName="extract-content" Jan 21 16:45:00 crc kubenswrapper[4890]: E0121 16:45:00.187658 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96d5707-e851-44c2-a275-1e9ecf564279" containerName="extract-utilities" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.187666 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96d5707-e851-44c2-a275-1e9ecf564279" containerName="extract-utilities" Jan 21 16:45:00 crc kubenswrapper[4890]: E0121 16:45:00.187701 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96d5707-e851-44c2-a275-1e9ecf564279" containerName="registry-server" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.187709 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96d5707-e851-44c2-a275-1e9ecf564279" containerName="registry-server" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.187868 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96d5707-e851-44c2-a275-1e9ecf564279" 
containerName="registry-server" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.188454 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.190411 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.190412 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.200371 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6"] Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.270295 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm98p\" (UniqueName: \"kubernetes.io/projected/1fc0f85a-930c-4300-9b6f-e45536fb511e-kube-api-access-bm98p\") pod \"collect-profiles-29483565-kmwl6\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.270446 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fc0f85a-930c-4300-9b6f-e45536fb511e-config-volume\") pod \"collect-profiles-29483565-kmwl6\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.270512 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1fc0f85a-930c-4300-9b6f-e45536fb511e-secret-volume\") pod \"collect-profiles-29483565-kmwl6\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.372240 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fc0f85a-930c-4300-9b6f-e45536fb511e-config-volume\") pod \"collect-profiles-29483565-kmwl6\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.372295 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1fc0f85a-930c-4300-9b6f-e45536fb511e-secret-volume\") pod \"collect-profiles-29483565-kmwl6\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.372520 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm98p\" (UniqueName: \"kubernetes.io/projected/1fc0f85a-930c-4300-9b6f-e45536fb511e-kube-api-access-bm98p\") pod \"collect-profiles-29483565-kmwl6\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.373421 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fc0f85a-930c-4300-9b6f-e45536fb511e-config-volume\") pod \"collect-profiles-29483565-kmwl6\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.381979 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1fc0f85a-930c-4300-9b6f-e45536fb511e-secret-volume\") pod \"collect-profiles-29483565-kmwl6\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.387687 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm98p\" (UniqueName: \"kubernetes.io/projected/1fc0f85a-930c-4300-9b6f-e45536fb511e-kube-api-access-bm98p\") pod \"collect-profiles-29483565-kmwl6\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.506392 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:00 crc kubenswrapper[4890]: I0121 16:45:00.931227 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6"] Jan 21 16:45:01 crc kubenswrapper[4890]: I0121 16:45:01.838832 4890 generic.go:334] "Generic (PLEG): container finished" podID="1fc0f85a-930c-4300-9b6f-e45536fb511e" containerID="4dd405ce2f3c7acfa49117f003f3a1c580bb4dca76a74bda8c1492791303336f" exitCode=0 Jan 21 16:45:01 crc kubenswrapper[4890]: I0121 16:45:01.838877 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" event={"ID":"1fc0f85a-930c-4300-9b6f-e45536fb511e","Type":"ContainerDied","Data":"4dd405ce2f3c7acfa49117f003f3a1c580bb4dca76a74bda8c1492791303336f"} Jan 21 16:45:01 crc kubenswrapper[4890]: I0121 16:45:01.839151 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" 
event={"ID":"1fc0f85a-930c-4300-9b6f-e45536fb511e","Type":"ContainerStarted","Data":"af9948f332dced25bb626d6d1897cc3230383370dcd961c379e7eac409b3fbad"} Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.092148 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.211508 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1fc0f85a-930c-4300-9b6f-e45536fb511e-secret-volume\") pod \"1fc0f85a-930c-4300-9b6f-e45536fb511e\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.211581 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm98p\" (UniqueName: \"kubernetes.io/projected/1fc0f85a-930c-4300-9b6f-e45536fb511e-kube-api-access-bm98p\") pod \"1fc0f85a-930c-4300-9b6f-e45536fb511e\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.211661 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fc0f85a-930c-4300-9b6f-e45536fb511e-config-volume\") pod \"1fc0f85a-930c-4300-9b6f-e45536fb511e\" (UID: \"1fc0f85a-930c-4300-9b6f-e45536fb511e\") " Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.212609 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fc0f85a-930c-4300-9b6f-e45536fb511e-config-volume" (OuterVolumeSpecName: "config-volume") pod "1fc0f85a-930c-4300-9b6f-e45536fb511e" (UID: "1fc0f85a-930c-4300-9b6f-e45536fb511e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.216816 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc0f85a-930c-4300-9b6f-e45536fb511e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1fc0f85a-930c-4300-9b6f-e45536fb511e" (UID: "1fc0f85a-930c-4300-9b6f-e45536fb511e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.217096 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc0f85a-930c-4300-9b6f-e45536fb511e-kube-api-access-bm98p" (OuterVolumeSpecName: "kube-api-access-bm98p") pod "1fc0f85a-930c-4300-9b6f-e45536fb511e" (UID: "1fc0f85a-930c-4300-9b6f-e45536fb511e"). InnerVolumeSpecName "kube-api-access-bm98p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.313494 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fc0f85a-930c-4300-9b6f-e45536fb511e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.313549 4890 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1fc0f85a-930c-4300-9b6f-e45536fb511e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.313560 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm98p\" (UniqueName: \"kubernetes.io/projected/1fc0f85a-930c-4300-9b6f-e45536fb511e-kube-api-access-bm98p\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.852835 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" 
event={"ID":"1fc0f85a-930c-4300-9b6f-e45536fb511e","Type":"ContainerDied","Data":"af9948f332dced25bb626d6d1897cc3230383370dcd961c379e7eac409b3fbad"} Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.852873 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9948f332dced25bb626d6d1897cc3230383370dcd961c379e7eac409b3fbad" Jan 21 16:45:03 crc kubenswrapper[4890]: I0121 16:45:03.852877 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6" Jan 21 16:45:04 crc kubenswrapper[4890]: I0121 16:45:04.161229 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx"] Jan 21 16:45:04 crc kubenswrapper[4890]: I0121 16:45:04.166942 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-bjqvx"] Jan 21 16:45:05 crc kubenswrapper[4890]: I0121 16:45:05.931268 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae" path="/var/lib/kubelet/pods/f02c46ca-15c3-4f0e-8c56-2db9a4ea44ae/volumes" Jan 21 16:45:18 crc kubenswrapper[4890]: I0121 16:45:18.195437 4890 scope.go:117] "RemoveContainer" containerID="ae10d6a2bde58e1fa209819f6160516f26a9ff0a5b2330688735e0d91cfe4cbc" Jan 21 16:46:48 crc kubenswrapper[4890]: I0121 16:46:48.763457 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:46:48 crc kubenswrapper[4890]: I0121 16:46:48.764053 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:47:18 crc kubenswrapper[4890]: I0121 16:47:18.762420 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:47:18 crc kubenswrapper[4890]: I0121 16:47:18.763136 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:47:48 crc kubenswrapper[4890]: I0121 16:47:48.762284 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:47:48 crc kubenswrapper[4890]: I0121 16:47:48.762871 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:47:48 crc kubenswrapper[4890]: I0121 16:47:48.762915 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:47:48 crc kubenswrapper[4890]: I0121 16:47:48.763584 4890 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:47:48 crc kubenswrapper[4890]: I0121 16:47:48.763642 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" gracePeriod=600 Jan 21 16:47:48 crc kubenswrapper[4890]: E0121 16:47:48.889469 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:47:48 crc kubenswrapper[4890]: I0121 16:47:48.928787 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" exitCode=0 Jan 21 16:47:48 crc kubenswrapper[4890]: I0121 16:47:48.928845 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334"} Jan 21 16:47:48 crc kubenswrapper[4890]: I0121 16:47:48.928891 4890 scope.go:117] "RemoveContainer" containerID="67bb4821ecd930d526dc1eae5b432050cf5f8355483b28ebb73dadcc300848fc" Jan 21 16:47:48 crc 
kubenswrapper[4890]: I0121 16:47:48.929466 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:47:48 crc kubenswrapper[4890]: E0121 16:47:48.929746 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:47:59 crc kubenswrapper[4890]: I0121 16:47:59.915206 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:47:59 crc kubenswrapper[4890]: E0121 16:47:59.915985 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:48:12 crc kubenswrapper[4890]: I0121 16:48:12.914117 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:48:12 crc kubenswrapper[4890]: E0121 16:48:12.914845 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 
21 16:48:23 crc kubenswrapper[4890]: I0121 16:48:23.914377 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:48:23 crc kubenswrapper[4890]: E0121 16:48:23.915104 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:48:37 crc kubenswrapper[4890]: I0121 16:48:37.918025 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:48:37 crc kubenswrapper[4890]: E0121 16:48:37.918798 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:48:48 crc kubenswrapper[4890]: I0121 16:48:48.914112 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:48:48 crc kubenswrapper[4890]: E0121 16:48:48.914888 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" 
podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:49:03 crc kubenswrapper[4890]: I0121 16:49:03.914246 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:49:03 crc kubenswrapper[4890]: E0121 16:49:03.915073 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:49:15 crc kubenswrapper[4890]: I0121 16:49:15.914668 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:49:15 crc kubenswrapper[4890]: E0121 16:49:15.915375 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:49:26 crc kubenswrapper[4890]: I0121 16:49:26.914256 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:49:26 crc kubenswrapper[4890]: E0121 16:49:26.914942 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:49:38 crc kubenswrapper[4890]: I0121 16:49:38.914030 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:49:38 crc kubenswrapper[4890]: E0121 16:49:38.914811 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:49:51 crc kubenswrapper[4890]: I0121 16:49:51.914876 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:49:51 crc kubenswrapper[4890]: E0121 16:49:51.915718 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:50:02 crc kubenswrapper[4890]: I0121 16:50:02.914809 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:50:02 crc kubenswrapper[4890]: E0121 16:50:02.915639 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.179109 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zpg46"] Jan 21 16:50:03 crc kubenswrapper[4890]: E0121 16:50:03.179624 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc0f85a-930c-4300-9b6f-e45536fb511e" containerName="collect-profiles" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.179646 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc0f85a-930c-4300-9b6f-e45536fb511e" containerName="collect-profiles" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.179861 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc0f85a-930c-4300-9b6f-e45536fb511e" containerName="collect-profiles" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.181770 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.190761 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpg46"] Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.315466 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-catalog-content\") pod \"certified-operators-zpg46\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.315799 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wfnt\" (UniqueName: \"kubernetes.io/projected/82a46ce3-2a09-49f7-baf4-182aca141c3b-kube-api-access-8wfnt\") pod \"certified-operators-zpg46\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.315867 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-utilities\") pod \"certified-operators-zpg46\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.417568 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wfnt\" (UniqueName: \"kubernetes.io/projected/82a46ce3-2a09-49f7-baf4-182aca141c3b-kube-api-access-8wfnt\") pod \"certified-operators-zpg46\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.417617 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-utilities\") pod \"certified-operators-zpg46\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.417671 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-catalog-content\") pod \"certified-operators-zpg46\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.418142 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-catalog-content\") pod \"certified-operators-zpg46\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.418659 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-utilities\") pod \"certified-operators-zpg46\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.439671 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wfnt\" (UniqueName: \"kubernetes.io/projected/82a46ce3-2a09-49f7-baf4-182aca141c3b-kube-api-access-8wfnt\") pod \"certified-operators-zpg46\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.506037 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:03 crc kubenswrapper[4890]: I0121 16:50:03.961637 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpg46"] Jan 21 16:50:04 crc kubenswrapper[4890]: I0121 16:50:04.820651 4890 generic.go:334] "Generic (PLEG): container finished" podID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerID="6ef4930c3bbc6791bbe6f2d8b926468e3c8cb79f243c34841bfe17433309e653" exitCode=0 Jan 21 16:50:04 crc kubenswrapper[4890]: I0121 16:50:04.820705 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpg46" event={"ID":"82a46ce3-2a09-49f7-baf4-182aca141c3b","Type":"ContainerDied","Data":"6ef4930c3bbc6791bbe6f2d8b926468e3c8cb79f243c34841bfe17433309e653"} Jan 21 16:50:04 crc kubenswrapper[4890]: I0121 16:50:04.820897 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpg46" event={"ID":"82a46ce3-2a09-49f7-baf4-182aca141c3b","Type":"ContainerStarted","Data":"2d8a853313612832f67761d8531bf0eee8e4cd4dfdf910830df2e7db9fb6f6b3"} Jan 21 16:50:04 crc kubenswrapper[4890]: I0121 16:50:04.822306 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:50:06 crc kubenswrapper[4890]: I0121 16:50:06.840273 4890 generic.go:334] "Generic (PLEG): container finished" podID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerID="929d9935e329cc1f440ebeffbbf5b3e9003ba4ed1b1b691420666f9f9bd81bb0" exitCode=0 Jan 21 16:50:06 crc kubenswrapper[4890]: I0121 16:50:06.840380 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpg46" event={"ID":"82a46ce3-2a09-49f7-baf4-182aca141c3b","Type":"ContainerDied","Data":"929d9935e329cc1f440ebeffbbf5b3e9003ba4ed1b1b691420666f9f9bd81bb0"} Jan 21 16:50:07 crc kubenswrapper[4890]: I0121 16:50:07.850915 4890 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-zpg46" event={"ID":"82a46ce3-2a09-49f7-baf4-182aca141c3b","Type":"ContainerStarted","Data":"b0b802e64d730935798bbf42858ea87f53125e70fcbd5db5afaa1d378ac9265b"} Jan 21 16:50:13 crc kubenswrapper[4890]: I0121 16:50:13.506507 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:13 crc kubenswrapper[4890]: I0121 16:50:13.506887 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:13 crc kubenswrapper[4890]: I0121 16:50:13.546708 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:13 crc kubenswrapper[4890]: I0121 16:50:13.563130 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zpg46" podStartSLOduration=8.048053772 podStartE2EDuration="10.563109973s" podCreationTimestamp="2026-01-21 16:50:03 +0000 UTC" firstStartedPulling="2026-01-21 16:50:04.822009337 +0000 UTC m=+4687.183451746" lastFinishedPulling="2026-01-21 16:50:07.337065538 +0000 UTC m=+4689.698507947" observedRunningTime="2026-01-21 16:50:07.875915443 +0000 UTC m=+4690.237357862" watchObservedRunningTime="2026-01-21 16:50:13.563109973 +0000 UTC m=+4695.924552372" Jan 21 16:50:13 crc kubenswrapper[4890]: I0121 16:50:13.936144 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:13 crc kubenswrapper[4890]: I0121 16:50:13.990255 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpg46"] Jan 21 16:50:15 crc kubenswrapper[4890]: I0121 16:50:15.906180 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zpg46" 
podUID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerName="registry-server" containerID="cri-o://b0b802e64d730935798bbf42858ea87f53125e70fcbd5db5afaa1d378ac9265b" gracePeriod=2 Jan 21 16:50:15 crc kubenswrapper[4890]: I0121 16:50:15.914106 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:50:15 crc kubenswrapper[4890]: E0121 16:50:15.914362 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:50:16 crc kubenswrapper[4890]: I0121 16:50:16.921666 4890 generic.go:334] "Generic (PLEG): container finished" podID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerID="b0b802e64d730935798bbf42858ea87f53125e70fcbd5db5afaa1d378ac9265b" exitCode=0 Jan 21 16:50:16 crc kubenswrapper[4890]: I0121 16:50:16.921741 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpg46" event={"ID":"82a46ce3-2a09-49f7-baf4-182aca141c3b","Type":"ContainerDied","Data":"b0b802e64d730935798bbf42858ea87f53125e70fcbd5db5afaa1d378ac9265b"} Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.311205 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.393990 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wfnt\" (UniqueName: \"kubernetes.io/projected/82a46ce3-2a09-49f7-baf4-182aca141c3b-kube-api-access-8wfnt\") pod \"82a46ce3-2a09-49f7-baf4-182aca141c3b\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.394037 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-catalog-content\") pod \"82a46ce3-2a09-49f7-baf4-182aca141c3b\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.394105 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-utilities\") pod \"82a46ce3-2a09-49f7-baf4-182aca141c3b\" (UID: \"82a46ce3-2a09-49f7-baf4-182aca141c3b\") " Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.394967 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-utilities" (OuterVolumeSpecName: "utilities") pod "82a46ce3-2a09-49f7-baf4-182aca141c3b" (UID: "82a46ce3-2a09-49f7-baf4-182aca141c3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.400111 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a46ce3-2a09-49f7-baf4-182aca141c3b-kube-api-access-8wfnt" (OuterVolumeSpecName: "kube-api-access-8wfnt") pod "82a46ce3-2a09-49f7-baf4-182aca141c3b" (UID: "82a46ce3-2a09-49f7-baf4-182aca141c3b"). InnerVolumeSpecName "kube-api-access-8wfnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.446296 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82a46ce3-2a09-49f7-baf4-182aca141c3b" (UID: "82a46ce3-2a09-49f7-baf4-182aca141c3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.495882 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.495921 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wfnt\" (UniqueName: \"kubernetes.io/projected/82a46ce3-2a09-49f7-baf4-182aca141c3b-kube-api-access-8wfnt\") on node \"crc\" DevicePath \"\"" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.495932 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a46ce3-2a09-49f7-baf4-182aca141c3b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.929819 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpg46" event={"ID":"82a46ce3-2a09-49f7-baf4-182aca141c3b","Type":"ContainerDied","Data":"2d8a853313612832f67761d8531bf0eee8e4cd4dfdf910830df2e7db9fb6f6b3"} Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.929875 4890 scope.go:117] "RemoveContainer" containerID="b0b802e64d730935798bbf42858ea87f53125e70fcbd5db5afaa1d378ac9265b" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.929887 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpg46" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.965228 4890 scope.go:117] "RemoveContainer" containerID="929d9935e329cc1f440ebeffbbf5b3e9003ba4ed1b1b691420666f9f9bd81bb0" Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.969895 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpg46"] Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.978657 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zpg46"] Jan 21 16:50:17 crc kubenswrapper[4890]: I0121 16:50:17.987275 4890 scope.go:117] "RemoveContainer" containerID="6ef4930c3bbc6791bbe6f2d8b926468e3c8cb79f243c34841bfe17433309e653" Jan 21 16:50:19 crc kubenswrapper[4890]: I0121 16:50:19.923445 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82a46ce3-2a09-49f7-baf4-182aca141c3b" path="/var/lib/kubelet/pods/82a46ce3-2a09-49f7-baf4-182aca141c3b/volumes" Jan 21 16:50:27 crc kubenswrapper[4890]: I0121 16:50:27.917945 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:50:27 crc kubenswrapper[4890]: E0121 16:50:27.919019 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:50:42 crc kubenswrapper[4890]: I0121 16:50:42.914127 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:50:42 crc kubenswrapper[4890]: E0121 16:50:42.914944 4890 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:50:57 crc kubenswrapper[4890]: I0121 16:50:57.917701 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:50:57 crc kubenswrapper[4890]: E0121 16:50:57.918567 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:51:12 crc kubenswrapper[4890]: I0121 16:51:12.914341 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:51:12 crc kubenswrapper[4890]: E0121 16:51:12.915139 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:51:23 crc kubenswrapper[4890]: I0121 16:51:23.914043 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:51:23 crc kubenswrapper[4890]: E0121 16:51:23.914698 4890 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.804203 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8xplq"] Jan 21 16:51:26 crc kubenswrapper[4890]: E0121 16:51:26.804914 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerName="extract-content" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.804928 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerName="extract-content" Jan 21 16:51:26 crc kubenswrapper[4890]: E0121 16:51:26.804942 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerName="extract-utilities" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.804950 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerName="extract-utilities" Jan 21 16:51:26 crc kubenswrapper[4890]: E0121 16:51:26.804980 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerName="registry-server" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.804988 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerName="registry-server" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.805159 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="82a46ce3-2a09-49f7-baf4-182aca141c3b" containerName="registry-server" Jan 21 16:51:26 crc 
kubenswrapper[4890]: I0121 16:51:26.806419 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.832581 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xplq"] Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.847854 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-utilities\") pod \"community-operators-8xplq\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.847907 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpnlg\" (UniqueName: \"kubernetes.io/projected/55350ff8-ae40-4b05-8af3-3eb174a210c3-kube-api-access-lpnlg\") pod \"community-operators-8xplq\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.848061 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-catalog-content\") pod \"community-operators-8xplq\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.949474 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-utilities\") pod \"community-operators-8xplq\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:26 crc 
kubenswrapper[4890]: I0121 16:51:26.949530 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpnlg\" (UniqueName: \"kubernetes.io/projected/55350ff8-ae40-4b05-8af3-3eb174a210c3-kube-api-access-lpnlg\") pod \"community-operators-8xplq\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.949583 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-catalog-content\") pod \"community-operators-8xplq\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.950060 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-utilities\") pod \"community-operators-8xplq\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.950117 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-catalog-content\") pod \"community-operators-8xplq\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:26 crc kubenswrapper[4890]: I0121 16:51:26.995367 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpnlg\" (UniqueName: \"kubernetes.io/projected/55350ff8-ae40-4b05-8af3-3eb174a210c3-kube-api-access-lpnlg\") pod \"community-operators-8xplq\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:27 crc kubenswrapper[4890]: I0121 
16:51:27.123385 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:27 crc kubenswrapper[4890]: I0121 16:51:27.594723 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xplq"] Jan 21 16:51:28 crc kubenswrapper[4890]: I0121 16:51:28.401094 4890 generic.go:334] "Generic (PLEG): container finished" podID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerID="6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff" exitCode=0 Jan 21 16:51:28 crc kubenswrapper[4890]: I0121 16:51:28.401147 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xplq" event={"ID":"55350ff8-ae40-4b05-8af3-3eb174a210c3","Type":"ContainerDied","Data":"6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff"} Jan 21 16:51:28 crc kubenswrapper[4890]: I0121 16:51:28.401180 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xplq" event={"ID":"55350ff8-ae40-4b05-8af3-3eb174a210c3","Type":"ContainerStarted","Data":"c86890310ebcdcb7a422807bb47ec5b7f62e391e7dd28f05b4ee748050bdfdb2"} Jan 21 16:51:30 crc kubenswrapper[4890]: I0121 16:51:30.417079 4890 generic.go:334] "Generic (PLEG): container finished" podID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerID="2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db" exitCode=0 Jan 21 16:51:30 crc kubenswrapper[4890]: I0121 16:51:30.417210 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xplq" event={"ID":"55350ff8-ae40-4b05-8af3-3eb174a210c3","Type":"ContainerDied","Data":"2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db"} Jan 21 16:51:31 crc kubenswrapper[4890]: I0121 16:51:31.424386 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xplq" 
event={"ID":"55350ff8-ae40-4b05-8af3-3eb174a210c3","Type":"ContainerStarted","Data":"209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a"} Jan 21 16:51:31 crc kubenswrapper[4890]: I0121 16:51:31.439775 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8xplq" podStartSLOduration=2.910999904 podStartE2EDuration="5.439754916s" podCreationTimestamp="2026-01-21 16:51:26 +0000 UTC" firstStartedPulling="2026-01-21 16:51:28.403124163 +0000 UTC m=+4770.764566572" lastFinishedPulling="2026-01-21 16:51:30.931879175 +0000 UTC m=+4773.293321584" observedRunningTime="2026-01-21 16:51:31.437507551 +0000 UTC m=+4773.798949960" watchObservedRunningTime="2026-01-21 16:51:31.439754916 +0000 UTC m=+4773.801197325" Jan 21 16:51:36 crc kubenswrapper[4890]: I0121 16:51:36.914077 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:51:36 crc kubenswrapper[4890]: E0121 16:51:36.914923 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:51:37 crc kubenswrapper[4890]: I0121 16:51:37.125275 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:37 crc kubenswrapper[4890]: I0121 16:51:37.125432 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:37 crc kubenswrapper[4890]: I0121 16:51:37.163520 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:37 crc kubenswrapper[4890]: I0121 16:51:37.493568 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:37 crc kubenswrapper[4890]: I0121 16:51:37.536765 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xplq"] Jan 21 16:51:39 crc kubenswrapper[4890]: I0121 16:51:39.469594 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8xplq" podUID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerName="registry-server" containerID="cri-o://209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a" gracePeriod=2 Jan 21 16:51:39 crc kubenswrapper[4890]: I0121 16:51:39.928511 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.118752 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpnlg\" (UniqueName: \"kubernetes.io/projected/55350ff8-ae40-4b05-8af3-3eb174a210c3-kube-api-access-lpnlg\") pod \"55350ff8-ae40-4b05-8af3-3eb174a210c3\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.118876 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-utilities\") pod \"55350ff8-ae40-4b05-8af3-3eb174a210c3\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.118896 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-catalog-content\") pod 
\"55350ff8-ae40-4b05-8af3-3eb174a210c3\" (UID: \"55350ff8-ae40-4b05-8af3-3eb174a210c3\") " Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.119713 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-utilities" (OuterVolumeSpecName: "utilities") pod "55350ff8-ae40-4b05-8af3-3eb174a210c3" (UID: "55350ff8-ae40-4b05-8af3-3eb174a210c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.123733 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55350ff8-ae40-4b05-8af3-3eb174a210c3-kube-api-access-lpnlg" (OuterVolumeSpecName: "kube-api-access-lpnlg") pod "55350ff8-ae40-4b05-8af3-3eb174a210c3" (UID: "55350ff8-ae40-4b05-8af3-3eb174a210c3"). InnerVolumeSpecName "kube-api-access-lpnlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.220572 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpnlg\" (UniqueName: \"kubernetes.io/projected/55350ff8-ae40-4b05-8af3-3eb174a210c3-kube-api-access-lpnlg\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.220606 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.263018 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55350ff8-ae40-4b05-8af3-3eb174a210c3" (UID: "55350ff8-ae40-4b05-8af3-3eb174a210c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.322316 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55350ff8-ae40-4b05-8af3-3eb174a210c3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.478542 4890 generic.go:334] "Generic (PLEG): container finished" podID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerID="209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a" exitCode=0 Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.478593 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xplq" event={"ID":"55350ff8-ae40-4b05-8af3-3eb174a210c3","Type":"ContainerDied","Data":"209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a"} Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.478624 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xplq" event={"ID":"55350ff8-ae40-4b05-8af3-3eb174a210c3","Type":"ContainerDied","Data":"c86890310ebcdcb7a422807bb47ec5b7f62e391e7dd28f05b4ee748050bdfdb2"} Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.478643 4890 scope.go:117] "RemoveContainer" containerID="209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.478793 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xplq" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.497878 4890 scope.go:117] "RemoveContainer" containerID="2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.518904 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xplq"] Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.525787 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8xplq"] Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.532996 4890 scope.go:117] "RemoveContainer" containerID="6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.551277 4890 scope.go:117] "RemoveContainer" containerID="209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a" Jan 21 16:51:40 crc kubenswrapper[4890]: E0121 16:51:40.552057 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a\": container with ID starting with 209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a not found: ID does not exist" containerID="209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.552213 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a"} err="failed to get container status \"209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a\": rpc error: code = NotFound desc = could not find container \"209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a\": container with ID starting with 209f5d68ea0ee5d5c3db7a7b0f5e41ad4185b2306bcb4f0a383a275af072cd3a not 
found: ID does not exist" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.552329 4890 scope.go:117] "RemoveContainer" containerID="2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db" Jan 21 16:51:40 crc kubenswrapper[4890]: E0121 16:51:40.552712 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db\": container with ID starting with 2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db not found: ID does not exist" containerID="2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.552745 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db"} err="failed to get container status \"2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db\": rpc error: code = NotFound desc = could not find container \"2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db\": container with ID starting with 2d8a8012c20134c34459f56c184aebe50ee5d60b5cf69e220a6e3a0c7cf2c4db not found: ID does not exist" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.552768 4890 scope.go:117] "RemoveContainer" containerID="6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff" Jan 21 16:51:40 crc kubenswrapper[4890]: E0121 16:51:40.553025 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff\": container with ID starting with 6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff not found: ID does not exist" containerID="6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff" Jan 21 16:51:40 crc kubenswrapper[4890]: I0121 16:51:40.553062 4890 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff"} err="failed to get container status \"6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff\": rpc error: code = NotFound desc = could not find container \"6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff\": container with ID starting with 6faf4393aa37be4438d8871f42caa5087c574a3e49cf74bf9b81015438c2bbff not found: ID does not exist" Jan 21 16:51:41 crc kubenswrapper[4890]: I0121 16:51:41.924158 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55350ff8-ae40-4b05-8af3-3eb174a210c3" path="/var/lib/kubelet/pods/55350ff8-ae40-4b05-8af3-3eb174a210c3/volumes" Jan 21 16:51:49 crc kubenswrapper[4890]: I0121 16:51:49.914500 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:51:49 crc kubenswrapper[4890]: E0121 16:51:49.915048 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:52:02 crc kubenswrapper[4890]: I0121 16:52:02.914182 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:52:02 crc kubenswrapper[4890]: E0121 16:52:02.914822 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:52:17 crc kubenswrapper[4890]: I0121 16:52:17.917861 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:52:17 crc kubenswrapper[4890]: E0121 16:52:17.918477 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:52:28 crc kubenswrapper[4890]: I0121 16:52:28.914625 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:52:28 crc kubenswrapper[4890]: E0121 16:52:28.915385 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:52:42 crc kubenswrapper[4890]: I0121 16:52:42.913917 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:52:42 crc kubenswrapper[4890]: E0121 16:52:42.914644 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:52:55 crc kubenswrapper[4890]: I0121 16:52:55.682158 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:52:56 crc kubenswrapper[4890]: I0121 16:52:56.695158 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"142c6fbaaaf0c0988b80a5cda216027a830094babae157afa2e11ed6dc30d815"} Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.161933 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-wqd2s"] Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.166932 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-wqd2s"] Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.306872 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-gpffk"] Jan 21 16:54:45 crc kubenswrapper[4890]: E0121 16:54:45.307825 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerName="extract-content" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.307920 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerName="extract-content" Jan 21 16:54:45 crc kubenswrapper[4890]: E0121 16:54:45.308009 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerName="extract-utilities" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.308076 4890 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerName="extract-utilities" Jan 21 16:54:45 crc kubenswrapper[4890]: E0121 16:54:45.308139 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerName="registry-server" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.308237 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerName="registry-server" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.308493 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="55350ff8-ae40-4b05-8af3-3eb174a210c3" containerName="registry-server" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.309184 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.312001 4890 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-l5sgg" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.312392 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.314420 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.317541 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.318736 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gpffk"] Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.348517 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41004b80-f00c-412f-9ae4-b2e393d5687c-node-mnt\") pod \"crc-storage-crc-gpffk\" (UID: 
\"41004b80-f00c-412f-9ae4-b2e393d5687c\") " pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.348646 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktb2t\" (UniqueName: \"kubernetes.io/projected/41004b80-f00c-412f-9ae4-b2e393d5687c-kube-api-access-ktb2t\") pod \"crc-storage-crc-gpffk\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.348691 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41004b80-f00c-412f-9ae4-b2e393d5687c-crc-storage\") pod \"crc-storage-crc-gpffk\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.450015 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41004b80-f00c-412f-9ae4-b2e393d5687c-crc-storage\") pod \"crc-storage-crc-gpffk\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.450391 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41004b80-f00c-412f-9ae4-b2e393d5687c-node-mnt\") pod \"crc-storage-crc-gpffk\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.450542 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktb2t\" (UniqueName: \"kubernetes.io/projected/41004b80-f00c-412f-9ae4-b2e393d5687c-kube-api-access-ktb2t\") pod \"crc-storage-crc-gpffk\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " 
pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.450763 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41004b80-f00c-412f-9ae4-b2e393d5687c-node-mnt\") pod \"crc-storage-crc-gpffk\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.451114 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41004b80-f00c-412f-9ae4-b2e393d5687c-crc-storage\") pod \"crc-storage-crc-gpffk\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.473279 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktb2t\" (UniqueName: \"kubernetes.io/projected/41004b80-f00c-412f-9ae4-b2e393d5687c-kube-api-access-ktb2t\") pod \"crc-storage-crc-gpffk\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.630636 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:45 crc kubenswrapper[4890]: I0121 16:54:45.924766 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb" path="/var/lib/kubelet/pods/9a0a4f22-8578-4231-aa4d-e14a7ffa4cdb/volumes" Jan 21 16:54:46 crc kubenswrapper[4890]: I0121 16:54:46.043458 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gpffk"] Jan 21 16:54:46 crc kubenswrapper[4890]: I0121 16:54:46.427484 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gpffk" event={"ID":"41004b80-f00c-412f-9ae4-b2e393d5687c","Type":"ContainerStarted","Data":"6c4883dfa0228b0bd5cd215fec296ed0fb7115b5fc4f80eeffe186eeda57ffd0"} Jan 21 16:54:47 crc kubenswrapper[4890]: I0121 16:54:47.434706 4890 generic.go:334] "Generic (PLEG): container finished" podID="41004b80-f00c-412f-9ae4-b2e393d5687c" containerID="db61e0bc92dae4b4bcebc38b96b466c861295bf121b78b0b95c56980ef0cf04e" exitCode=0 Jan 21 16:54:47 crc kubenswrapper[4890]: I0121 16:54:47.434766 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gpffk" event={"ID":"41004b80-f00c-412f-9ae4-b2e393d5687c","Type":"ContainerDied","Data":"db61e0bc92dae4b4bcebc38b96b466c861295bf121b78b0b95c56980ef0cf04e"} Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.697332 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.801371 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41004b80-f00c-412f-9ae4-b2e393d5687c-crc-storage\") pod \"41004b80-f00c-412f-9ae4-b2e393d5687c\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.801538 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41004b80-f00c-412f-9ae4-b2e393d5687c-node-mnt\") pod \"41004b80-f00c-412f-9ae4-b2e393d5687c\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.801579 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktb2t\" (UniqueName: \"kubernetes.io/projected/41004b80-f00c-412f-9ae4-b2e393d5687c-kube-api-access-ktb2t\") pod \"41004b80-f00c-412f-9ae4-b2e393d5687c\" (UID: \"41004b80-f00c-412f-9ae4-b2e393d5687c\") " Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.801760 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41004b80-f00c-412f-9ae4-b2e393d5687c-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "41004b80-f00c-412f-9ae4-b2e393d5687c" (UID: "41004b80-f00c-412f-9ae4-b2e393d5687c"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.801841 4890 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41004b80-f00c-412f-9ae4-b2e393d5687c-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.808538 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41004b80-f00c-412f-9ae4-b2e393d5687c-kube-api-access-ktb2t" (OuterVolumeSpecName: "kube-api-access-ktb2t") pod "41004b80-f00c-412f-9ae4-b2e393d5687c" (UID: "41004b80-f00c-412f-9ae4-b2e393d5687c"). InnerVolumeSpecName "kube-api-access-ktb2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.820989 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41004b80-f00c-412f-9ae4-b2e393d5687c-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "41004b80-f00c-412f-9ae4-b2e393d5687c" (UID: "41004b80-f00c-412f-9ae4-b2e393d5687c"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.903471 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktb2t\" (UniqueName: \"kubernetes.io/projected/41004b80-f00c-412f-9ae4-b2e393d5687c-kube-api-access-ktb2t\") on node \"crc\" DevicePath \"\"" Jan 21 16:54:48 crc kubenswrapper[4890]: I0121 16:54:48.903519 4890 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41004b80-f00c-412f-9ae4-b2e393d5687c-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 16:54:49 crc kubenswrapper[4890]: I0121 16:54:49.448953 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gpffk" event={"ID":"41004b80-f00c-412f-9ae4-b2e393d5687c","Type":"ContainerDied","Data":"6c4883dfa0228b0bd5cd215fec296ed0fb7115b5fc4f80eeffe186eeda57ffd0"} Jan 21 16:54:49 crc kubenswrapper[4890]: I0121 16:54:49.449312 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4883dfa0228b0bd5cd215fec296ed0fb7115b5fc4f80eeffe186eeda57ffd0" Jan 21 16:54:49 crc kubenswrapper[4890]: I0121 16:54:49.448990 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gpffk" Jan 21 16:54:50 crc kubenswrapper[4890]: I0121 16:54:50.949331 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-gpffk"] Jan 21 16:54:50 crc kubenswrapper[4890]: I0121 16:54:50.955136 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-gpffk"] Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.088342 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-n77xm"] Jan 21 16:54:51 crc kubenswrapper[4890]: E0121 16:54:51.088697 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41004b80-f00c-412f-9ae4-b2e393d5687c" containerName="storage" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.088727 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="41004b80-f00c-412f-9ae4-b2e393d5687c" containerName="storage" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.089010 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="41004b80-f00c-412f-9ae4-b2e393d5687c" containerName="storage" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.089673 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.093105 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.093312 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.093927 4890 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-l5sgg" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.094091 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.094977 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-n77xm"] Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.132110 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a7e3376-0f38-4c6a-8263-56a762a32a7b-node-mnt\") pod \"crc-storage-crc-n77xm\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.132175 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5zzm\" (UniqueName: \"kubernetes.io/projected/4a7e3376-0f38-4c6a-8263-56a762a32a7b-kube-api-access-x5zzm\") pod \"crc-storage-crc-n77xm\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.132210 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a7e3376-0f38-4c6a-8263-56a762a32a7b-crc-storage\") pod \"crc-storage-crc-n77xm\" (UID: 
\"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.233999 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a7e3376-0f38-4c6a-8263-56a762a32a7b-node-mnt\") pod \"crc-storage-crc-n77xm\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.234423 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5zzm\" (UniqueName: \"kubernetes.io/projected/4a7e3376-0f38-4c6a-8263-56a762a32a7b-kube-api-access-x5zzm\") pod \"crc-storage-crc-n77xm\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.234473 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a7e3376-0f38-4c6a-8263-56a762a32a7b-crc-storage\") pod \"crc-storage-crc-n77xm\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.234316 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a7e3376-0f38-4c6a-8263-56a762a32a7b-node-mnt\") pod \"crc-storage-crc-n77xm\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.235456 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a7e3376-0f38-4c6a-8263-56a762a32a7b-crc-storage\") pod \"crc-storage-crc-n77xm\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.259019 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5zzm\" (UniqueName: \"kubernetes.io/projected/4a7e3376-0f38-4c6a-8263-56a762a32a7b-kube-api-access-x5zzm\") pod \"crc-storage-crc-n77xm\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.409041 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.874145 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-n77xm"] Jan 21 16:54:51 crc kubenswrapper[4890]: I0121 16:54:51.921717 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41004b80-f00c-412f-9ae4-b2e393d5687c" path="/var/lib/kubelet/pods/41004b80-f00c-412f-9ae4-b2e393d5687c/volumes" Jan 21 16:54:52 crc kubenswrapper[4890]: I0121 16:54:52.472447 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-n77xm" event={"ID":"4a7e3376-0f38-4c6a-8263-56a762a32a7b","Type":"ContainerStarted","Data":"02ccca577eb45c8ec16d5969f6671260a72a925a22fd9b4e7578ef09106acc1a"} Jan 21 16:54:53 crc kubenswrapper[4890]: I0121 16:54:53.479967 4890 generic.go:334] "Generic (PLEG): container finished" podID="4a7e3376-0f38-4c6a-8263-56a762a32a7b" containerID="f8b9207380cff6a256514c66fc7d4756736a2ea8cece0a82e7a891722007c477" exitCode=0 Jan 21 16:54:53 crc kubenswrapper[4890]: I0121 16:54:53.480034 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-n77xm" event={"ID":"4a7e3376-0f38-4c6a-8263-56a762a32a7b","Type":"ContainerDied","Data":"f8b9207380cff6a256514c66fc7d4756736a2ea8cece0a82e7a891722007c477"} Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.741377 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.884965 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a7e3376-0f38-4c6a-8263-56a762a32a7b-crc-storage\") pod \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.885108 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5zzm\" (UniqueName: \"kubernetes.io/projected/4a7e3376-0f38-4c6a-8263-56a762a32a7b-kube-api-access-x5zzm\") pod \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.885133 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a7e3376-0f38-4c6a-8263-56a762a32a7b-node-mnt\") pod \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\" (UID: \"4a7e3376-0f38-4c6a-8263-56a762a32a7b\") " Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.885340 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a7e3376-0f38-4c6a-8263-56a762a32a7b-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "4a7e3376-0f38-4c6a-8263-56a762a32a7b" (UID: "4a7e3376-0f38-4c6a-8263-56a762a32a7b"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.889993 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a7e3376-0f38-4c6a-8263-56a762a32a7b-kube-api-access-x5zzm" (OuterVolumeSpecName: "kube-api-access-x5zzm") pod "4a7e3376-0f38-4c6a-8263-56a762a32a7b" (UID: "4a7e3376-0f38-4c6a-8263-56a762a32a7b"). InnerVolumeSpecName "kube-api-access-x5zzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.902894 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a7e3376-0f38-4c6a-8263-56a762a32a7b-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "4a7e3376-0f38-4c6a-8263-56a762a32a7b" (UID: "4a7e3376-0f38-4c6a-8263-56a762a32a7b"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.987099 4890 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a7e3376-0f38-4c6a-8263-56a762a32a7b-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.987137 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5zzm\" (UniqueName: \"kubernetes.io/projected/4a7e3376-0f38-4c6a-8263-56a762a32a7b-kube-api-access-x5zzm\") on node \"crc\" DevicePath \"\"" Jan 21 16:54:54 crc kubenswrapper[4890]: I0121 16:54:54.987147 4890 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a7e3376-0f38-4c6a-8263-56a762a32a7b-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 16:54:55 crc kubenswrapper[4890]: I0121 16:54:55.496643 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-n77xm" event={"ID":"4a7e3376-0f38-4c6a-8263-56a762a32a7b","Type":"ContainerDied","Data":"02ccca577eb45c8ec16d5969f6671260a72a925a22fd9b4e7578ef09106acc1a"} Jan 21 16:54:55 crc kubenswrapper[4890]: I0121 16:54:55.496687 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ccca577eb45c8ec16d5969f6671260a72a925a22fd9b4e7578ef09106acc1a" Jan 21 16:54:55 crc kubenswrapper[4890]: I0121 16:54:55.496699 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-n77xm" Jan 21 16:55:18 crc kubenswrapper[4890]: I0121 16:55:18.382556 4890 scope.go:117] "RemoveContainer" containerID="75735260101bba2496b0774c759dda80b93f8a9005252b3a5e675edf3c31947d" Jan 21 16:55:18 crc kubenswrapper[4890]: I0121 16:55:18.762395 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:55:18 crc kubenswrapper[4890]: I0121 16:55:18.762452 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:55:48 crc kubenswrapper[4890]: I0121 16:55:48.761809 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:55:48 crc kubenswrapper[4890]: I0121 16:55:48.762507 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:56:18 crc kubenswrapper[4890]: I0121 16:56:18.762541 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:56:18 crc kubenswrapper[4890]: I0121 16:56:18.763230 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:56:18 crc kubenswrapper[4890]: I0121 16:56:18.763282 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:56:18 crc kubenswrapper[4890]: I0121 16:56:18.764033 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"142c6fbaaaf0c0988b80a5cda216027a830094babae157afa2e11ed6dc30d815"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:56:18 crc kubenswrapper[4890]: I0121 16:56:18.764091 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://142c6fbaaaf0c0988b80a5cda216027a830094babae157afa2e11ed6dc30d815" gracePeriod=600 Jan 21 16:56:19 crc kubenswrapper[4890]: I0121 16:56:19.078083 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="142c6fbaaaf0c0988b80a5cda216027a830094babae157afa2e11ed6dc30d815" exitCode=0 Jan 21 16:56:19 crc kubenswrapper[4890]: I0121 16:56:19.078234 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" 
event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"142c6fbaaaf0c0988b80a5cda216027a830094babae157afa2e11ed6dc30d815"} Jan 21 16:56:19 crc kubenswrapper[4890]: I0121 16:56:19.078512 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443"} Jan 21 16:56:19 crc kubenswrapper[4890]: I0121 16:56:19.078534 4890 scope.go:117] "RemoveContainer" containerID="95fce5dddb3ced0730d37e996b9c0e4ab2cf453151d0f9941635ab0b7bee4334" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.290923 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-grx7m"] Jan 21 16:57:15 crc kubenswrapper[4890]: E0121 16:57:15.292030 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a7e3376-0f38-4c6a-8263-56a762a32a7b" containerName="storage" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.292052 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a7e3376-0f38-4c6a-8263-56a762a32a7b" containerName="storage" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.292245 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a7e3376-0f38-4c6a-8263-56a762a32a7b" containerName="storage" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.293257 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.299946 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-qk8lr"] Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.301493 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.302528 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.302803 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.302956 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.302968 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-r6dbs" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.306737 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.308727 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-grx7m"] Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.341920 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9z4s\" (UniqueName: \"kubernetes.io/projected/13957c23-c1bc-405b-adc0-50748d869cf7-kube-api-access-k9z4s\") pod \"dnsmasq-dns-5986db9b4f-grx7m\" (UID: \"13957c23-c1bc-405b-adc0-50748d869cf7\") " pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.341991 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-qk8lr\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.342042 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r95nj\" (UniqueName: \"kubernetes.io/projected/cf3f221a-cb1a-4582-98d7-1baca76cda7d-kube-api-access-r95nj\") pod \"dnsmasq-dns-56bbd59dc5-qk8lr\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.342080 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-config\") pod \"dnsmasq-dns-56bbd59dc5-qk8lr\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.342130 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13957c23-c1bc-405b-adc0-50748d869cf7-config\") pod \"dnsmasq-dns-5986db9b4f-grx7m\" (UID: \"13957c23-c1bc-405b-adc0-50748d869cf7\") " pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.344342 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-qk8lr"] Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.443654 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-qk8lr\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.443735 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r95nj\" (UniqueName: \"kubernetes.io/projected/cf3f221a-cb1a-4582-98d7-1baca76cda7d-kube-api-access-r95nj\") pod \"dnsmasq-dns-56bbd59dc5-qk8lr\" (UID: 
\"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.443776 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-config\") pod \"dnsmasq-dns-56bbd59dc5-qk8lr\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.443832 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13957c23-c1bc-405b-adc0-50748d869cf7-config\") pod \"dnsmasq-dns-5986db9b4f-grx7m\" (UID: \"13957c23-c1bc-405b-adc0-50748d869cf7\") " pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.443897 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9z4s\" (UniqueName: \"kubernetes.io/projected/13957c23-c1bc-405b-adc0-50748d869cf7-kube-api-access-k9z4s\") pod \"dnsmasq-dns-5986db9b4f-grx7m\" (UID: \"13957c23-c1bc-405b-adc0-50748d869cf7\") " pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.445092 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13957c23-c1bc-405b-adc0-50748d869cf7-config\") pod \"dnsmasq-dns-5986db9b4f-grx7m\" (UID: \"13957c23-c1bc-405b-adc0-50748d869cf7\") " pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.445273 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-config\") pod \"dnsmasq-dns-56bbd59dc5-qk8lr\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc 
kubenswrapper[4890]: I0121 16:57:15.446715 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-qk8lr\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.472240 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9z4s\" (UniqueName: \"kubernetes.io/projected/13957c23-c1bc-405b-adc0-50748d869cf7-kube-api-access-k9z4s\") pod \"dnsmasq-dns-5986db9b4f-grx7m\" (UID: \"13957c23-c1bc-405b-adc0-50748d869cf7\") " pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.474301 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r95nj\" (UniqueName: \"kubernetes.io/projected/cf3f221a-cb1a-4582-98d7-1baca76cda7d-kube-api-access-r95nj\") pod \"dnsmasq-dns-56bbd59dc5-qk8lr\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.625520 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.645843 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.904052 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-qk8lr"] Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.973829 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95587bc99-sbblj"] Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.976784 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-sbblj"] Jan 21 16:57:15 crc kubenswrapper[4890]: I0121 16:57:15.977103 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.057391 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-config\") pod \"dnsmasq-dns-95587bc99-sbblj\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.057467 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pfnd\" (UniqueName: \"kubernetes.io/projected/061190aa-acfc-4a5e-a441-b663786a8c79-kube-api-access-8pfnd\") pod \"dnsmasq-dns-95587bc99-sbblj\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.057585 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-dns-svc\") pod \"dnsmasq-dns-95587bc99-sbblj\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 
16:57:16.158443 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-dns-svc\") pod \"dnsmasq-dns-95587bc99-sbblj\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.158510 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-config\") pod \"dnsmasq-dns-95587bc99-sbblj\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.158584 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pfnd\" (UniqueName: \"kubernetes.io/projected/061190aa-acfc-4a5e-a441-b663786a8c79-kube-api-access-8pfnd\") pod \"dnsmasq-dns-95587bc99-sbblj\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.160113 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-dns-svc\") pod \"dnsmasq-dns-95587bc99-sbblj\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.160872 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-config\") pod \"dnsmasq-dns-95587bc99-sbblj\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.181010 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pfnd\" 
(UniqueName: \"kubernetes.io/projected/061190aa-acfc-4a5e-a441-b663786a8c79-kube-api-access-8pfnd\") pod \"dnsmasq-dns-95587bc99-sbblj\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.235519 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-grx7m"] Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.299277 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.300221 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-zpjzh"] Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.306332 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.326733 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-zpjzh"] Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.365493 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdlx7\" (UniqueName: \"kubernetes.io/projected/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-kube-api-access-fdlx7\") pod \"dnsmasq-dns-5d79f765b5-zpjzh\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") " pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.365568 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-config\") pod \"dnsmasq-dns-5d79f765b5-zpjzh\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") " pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.365613 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-zpjzh\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") " pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.403666 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-grx7m"] Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.467662 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-zpjzh\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") " pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.467761 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdlx7\" (UniqueName: \"kubernetes.io/projected/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-kube-api-access-fdlx7\") pod \"dnsmasq-dns-5d79f765b5-zpjzh\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") " pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.467798 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-config\") pod \"dnsmasq-dns-5d79f765b5-zpjzh\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") " pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.468886 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-zpjzh\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") " pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 
crc kubenswrapper[4890]: I0121 16:57:16.472314 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-config\") pod \"dnsmasq-dns-5d79f765b5-zpjzh\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") " pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.484180 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-qk8lr"] Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.494593 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" event={"ID":"13957c23-c1bc-405b-adc0-50748d869cf7","Type":"ContainerStarted","Data":"63fb566602e11ae2c4794af58f39249610b4993704d1d555fa165b6dda2a2339"} Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.495388 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlx7\" (UniqueName: \"kubernetes.io/projected/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-kube-api-access-fdlx7\") pod \"dnsmasq-dns-5d79f765b5-zpjzh\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") " pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.637428 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:16 crc kubenswrapper[4890]: I0121 16:57:16.902273 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-sbblj"] Jan 21 16:57:16 crc kubenswrapper[4890]: W0121 16:57:16.914620 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod061190aa_acfc_4a5e_a441_b663786a8c79.slice/crio-e4fc8aa6831073ba4fde49067603072ad09d4b2d5096e96f16fdb38456c1af40 WatchSource:0}: Error finding container e4fc8aa6831073ba4fde49067603072ad09d4b2d5096e96f16fdb38456c1af40: Status 404 returned error can't find the container with id e4fc8aa6831073ba4fde49067603072ad09d4b2d5096e96f16fdb38456c1af40 Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.054078 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.055914 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.058299 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.058494 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.058740 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.059056 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.059535 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7t5z4" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.059634 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.059717 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.071737 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081402 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081453 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081494 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081519 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081544 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081608 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081663 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081697 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081736 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flvt6\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-kube-api-access-flvt6\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081772 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.081809 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.109935 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-zpjzh"] Jan 21 16:57:17 crc kubenswrapper[4890]: W0121 16:57:17.112827 4890 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podace1a931_b27b_4f47_b5b7_f5bbb86d8eb2.slice/crio-2f1aec30151abc47eeacc0aad6ce396ed4a1283b9787f67ed9b84d9bf5219227 WatchSource:0}: Error finding container 2f1aec30151abc47eeacc0aad6ce396ed4a1283b9787f67ed9b84d9bf5219227: Status 404 returned error can't find the container with id 2f1aec30151abc47eeacc0aad6ce396ed4a1283b9787f67ed9b84d9bf5219227 Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183289 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183347 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183428 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183453 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183476 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-flvt6\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-kube-api-access-flvt6\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183500 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183524 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183567 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183587 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183614 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.183635 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.184103 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.184278 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.184534 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.184838 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" 
Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.185132 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.187811 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.187902 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.188583 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.188608 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/20c8a0b071852f10b7877f1e3c3ad1fae29df389ec481cfb14b2c234343df770/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.195603 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.195939 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.204044 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flvt6\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-kube-api-access-flvt6\") pod \"rabbitmq-server-0\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.217756 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") pod \"rabbitmq-server-0\" 
(UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") " pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.379513 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.421505 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.422826 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: W0121 16:57:17.426600 4890 reflector.go:561] object-"openstack"/"rabbitmq-cell1-plugins-conf": failed to list *v1.ConfigMap: configmaps "rabbitmq-cell1-plugins-conf" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:57:17 crc kubenswrapper[4890]: E0121 16:57:17.426658 4890 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"rabbitmq-cell1-plugins-conf\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 16:57:17 crc kubenswrapper[4890]: W0121 16:57:17.426714 4890 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-dockercfg-wdcz7": failed to list *v1.Secret: secrets "rabbitmq-cell1-server-dockercfg-wdcz7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:57:17 crc kubenswrapper[4890]: E0121 16:57:17.426728 4890 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"rabbitmq-cell1-server-dockercfg-wdcz7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"rabbitmq-cell1-server-dockercfg-wdcz7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.426934 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 16:57:17 crc kubenswrapper[4890]: W0121 16:57:17.427174 4890 reflector.go:561] object-"openstack"/"rabbitmq-cell1-config-data": failed to list *v1.ConfigMap: configmaps "rabbitmq-cell1-config-data" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:57:17 crc kubenswrapper[4890]: E0121 16:57:17.427195 4890 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"rabbitmq-cell1-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 16:57:17 crc kubenswrapper[4890]: W0121 16:57:17.427235 4890 reflector.go:561] object-"openstack"/"rabbitmq-cell1-erlang-cookie": failed to list *v1.Secret: secrets "rabbitmq-cell1-erlang-cookie" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:57:17 crc kubenswrapper[4890]: E0121 16:57:17.427256 4890 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: 
secrets \"rabbitmq-cell1-erlang-cookie\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.427302 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.427484 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.449589 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.517137 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" event={"ID":"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2","Type":"ContainerStarted","Data":"2f1aec30151abc47eeacc0aad6ce396ed4a1283b9787f67ed9b84d9bf5219227"} Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.529644 4890 generic.go:334] "Generic (PLEG): container finished" podID="cf3f221a-cb1a-4582-98d7-1baca76cda7d" containerID="00e008b4078a6497f47c2fb62f9bacbbf8d8ba2aed5b952d9498f400df3c4acf" exitCode=0 Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.529968 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" event={"ID":"cf3f221a-cb1a-4582-98d7-1baca76cda7d","Type":"ContainerDied","Data":"00e008b4078a6497f47c2fb62f9bacbbf8d8ba2aed5b952d9498f400df3c4acf"} Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.529996 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" event={"ID":"cf3f221a-cb1a-4582-98d7-1baca76cda7d","Type":"ContainerStarted","Data":"635f8e9fbe97bb46c2c40b8759145f0b9f72b9fa954a80673e2ddaa877cecbfd"} Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 
16:57:17.547551 4890 generic.go:334] "Generic (PLEG): container finished" podID="13957c23-c1bc-405b-adc0-50748d869cf7" containerID="c38a8611bd93c3f92bfd561475b247c8916bc012585c4ecac646d121171a168e" exitCode=0 Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.547623 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" event={"ID":"13957c23-c1bc-405b-adc0-50748d869cf7","Type":"ContainerDied","Data":"c38a8611bd93c3f92bfd561475b247c8916bc012585c4ecac646d121171a168e"} Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.570322 4890 generic.go:334] "Generic (PLEG): container finished" podID="061190aa-acfc-4a5e-a441-b663786a8c79" containerID="1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d" exitCode=0 Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.570458 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-sbblj" event={"ID":"061190aa-acfc-4a5e-a441-b663786a8c79","Type":"ContainerDied","Data":"1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d"} Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.570493 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-sbblj" event={"ID":"061190aa-acfc-4a5e-a441-b663786a8c79","Type":"ContainerStarted","Data":"e4fc8aa6831073ba4fde49067603072ad09d4b2d5096e96f16fdb38456c1af40"} Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588203 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588251 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e526a79-1777-471e-8030-64b43ff58732-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588275 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588319 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588346 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588404 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588433 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588455 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjcsf\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-kube-api-access-tjcsf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588509 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e526a79-1777-471e-8030-64b43ff58732-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588539 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.588562 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689467 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689527 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e526a79-1777-471e-8030-64b43ff58732-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689554 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689604 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689632 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689652 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689690 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689716 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjcsf\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-kube-api-access-tjcsf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689778 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e526a79-1777-471e-8030-64b43ff58732-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689816 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.689843 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.691575 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.692472 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.692687 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.702949 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.705037 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e526a79-1777-471e-8030-64b43ff58732-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc 
kubenswrapper[4890]: I0121 16:57:17.718114 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.720101 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.720143 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f034c466dd311609fabd9f20fa7bde3f7358956056613486c49868bffd168659/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.733477 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjcsf\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-kube-api-access-tjcsf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.753938 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.868971 4890 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.944597 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.995260 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9z4s\" (UniqueName: \"kubernetes.io/projected/13957c23-c1bc-405b-adc0-50748d869cf7-kube-api-access-k9z4s\") pod \"13957c23-c1bc-405b-adc0-50748d869cf7\" (UID: \"13957c23-c1bc-405b-adc0-50748d869cf7\") " Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.995660 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r95nj\" (UniqueName: \"kubernetes.io/projected/cf3f221a-cb1a-4582-98d7-1baca76cda7d-kube-api-access-r95nj\") pod \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.995820 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-config\") pod \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.995898 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13957c23-c1bc-405b-adc0-50748d869cf7-config\") pod \"13957c23-c1bc-405b-adc0-50748d869cf7\" (UID: \"13957c23-c1bc-405b-adc0-50748d869cf7\") " Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.996198 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-dns-svc\") pod 
\"cf3f221a-cb1a-4582-98d7-1baca76cda7d\" (UID: \"cf3f221a-cb1a-4582-98d7-1baca76cda7d\") " Jan 21 16:57:17 crc kubenswrapper[4890]: I0121 16:57:17.999290 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3f221a-cb1a-4582-98d7-1baca76cda7d-kube-api-access-r95nj" (OuterVolumeSpecName: "kube-api-access-r95nj") pod "cf3f221a-cb1a-4582-98d7-1baca76cda7d" (UID: "cf3f221a-cb1a-4582-98d7-1baca76cda7d"). InnerVolumeSpecName "kube-api-access-r95nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:17.999996 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13957c23-c1bc-405b-adc0-50748d869cf7-kube-api-access-k9z4s" (OuterVolumeSpecName: "kube-api-access-k9z4s") pod "13957c23-c1bc-405b-adc0-50748d869cf7" (UID: "13957c23-c1bc-405b-adc0-50748d869cf7"). InnerVolumeSpecName "kube-api-access-k9z4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.012387 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13957c23-c1bc-405b-adc0-50748d869cf7-config" (OuterVolumeSpecName: "config") pod "13957c23-c1bc-405b-adc0-50748d869cf7" (UID: "13957c23-c1bc-405b-adc0-50748d869cf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.013000 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf3f221a-cb1a-4582-98d7-1baca76cda7d" (UID: "cf3f221a-cb1a-4582-98d7-1baca76cda7d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.016049 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-config" (OuterVolumeSpecName: "config") pod "cf3f221a-cb1a-4582-98d7-1baca76cda7d" (UID: "cf3f221a-cb1a-4582-98d7-1baca76cda7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.098018 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9z4s\" (UniqueName: \"kubernetes.io/projected/13957c23-c1bc-405b-adc0-50748d869cf7-kube-api-access-k9z4s\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.098060 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r95nj\" (UniqueName: \"kubernetes.io/projected/cf3f221a-cb1a-4582-98d7-1baca76cda7d-kube-api-access-r95nj\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.098073 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.098082 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13957c23-c1bc-405b-adc0-50748d869cf7-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.098094 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf3f221a-cb1a-4582-98d7-1baca76cda7d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.117490 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:57:18 crc kubenswrapper[4890]: W0121 
16:57:18.118706 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d57a8f7_2e2b_41ff_8274_2daf71db0e8f.slice/crio-75477d66223d67b5f33d1557c9134a0635f5b4ecbed941bf8c7fe3e5798dc230 WatchSource:0}: Error finding container 75477d66223d67b5f33d1557c9134a0635f5b4ecbed941bf8c7fe3e5798dc230: Status 404 returned error can't find the container with id 75477d66223d67b5f33d1557c9134a0635f5b4ecbed941bf8c7fe3e5798dc230 Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.143295 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 16:57:18 crc kubenswrapper[4890]: E0121 16:57:18.143617 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3f221a-cb1a-4582-98d7-1baca76cda7d" containerName="init" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.143634 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3f221a-cb1a-4582-98d7-1baca76cda7d" containerName="init" Jan 21 16:57:18 crc kubenswrapper[4890]: E0121 16:57:18.143646 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13957c23-c1bc-405b-adc0-50748d869cf7" containerName="init" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.143652 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="13957c23-c1bc-405b-adc0-50748d869cf7" containerName="init" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.143784 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="13957c23-c1bc-405b-adc0-50748d869cf7" containerName="init" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.143800 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf3f221a-cb1a-4582-98d7-1baca76cda7d" containerName="init" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.144499 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.147424 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.150416 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.151059 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-99zfg" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.160866 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.170483 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.184259 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.201776 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb6d3f3b-c2f7-4793-8a37-b892f720145c-config-data-default\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.202486 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb6d3f3b-c2f7-4793-8a37-b892f720145c-kolla-config\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.202523 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6d3f3b-c2f7-4793-8a37-b892f720145c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.202541 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb6d3f3b-c2f7-4793-8a37-b892f720145c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.202573 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e08b8fc-41c3-45f2-abbb-287d48890ff9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e08b8fc-41c3-45f2-abbb-287d48890ff9\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.202593 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmz7\" (UniqueName: \"kubernetes.io/projected/bb6d3f3b-c2f7-4793-8a37-b892f720145c-kube-api-access-6mmz7\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.202697 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb6d3f3b-c2f7-4793-8a37-b892f720145c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.202716 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb6d3f3b-c2f7-4793-8a37-b892f720145c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.304332 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb6d3f3b-c2f7-4793-8a37-b892f720145c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.304398 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb6d3f3b-c2f7-4793-8a37-b892f720145c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.304467 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb6d3f3b-c2f7-4793-8a37-b892f720145c-config-data-default\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.304504 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb6d3f3b-c2f7-4793-8a37-b892f720145c-kolla-config\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.304530 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb6d3f3b-c2f7-4793-8a37-b892f720145c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.304549 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb6d3f3b-c2f7-4793-8a37-b892f720145c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.304589 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e08b8fc-41c3-45f2-abbb-287d48890ff9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e08b8fc-41c3-45f2-abbb-287d48890ff9\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.304611 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mmz7\" (UniqueName: \"kubernetes.io/projected/bb6d3f3b-c2f7-4793-8a37-b892f720145c-kube-api-access-6mmz7\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.305246 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb6d3f3b-c2f7-4793-8a37-b892f720145c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.306010 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb6d3f3b-c2f7-4793-8a37-b892f720145c-config-data-default\") pod 
\"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.306339 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb6d3f3b-c2f7-4793-8a37-b892f720145c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.306675 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb6d3f3b-c2f7-4793-8a37-b892f720145c-kolla-config\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.308459 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.308604 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e08b8fc-41c3-45f2-abbb-287d48890ff9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e08b8fc-41c3-45f2-abbb-287d48890ff9\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c1099aea5c9375942f8439faa1ea0e25a5e4c892a253a2587433f17d9a80703/globalmount\"" pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.308460 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6d3f3b-c2f7-4793-8a37-b892f720145c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.308465 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb6d3f3b-c2f7-4793-8a37-b892f720145c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.322341 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mmz7\" (UniqueName: \"kubernetes.io/projected/bb6d3f3b-c2f7-4793-8a37-b892f720145c-kube-api-access-6mmz7\") pod \"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.337225 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e08b8fc-41c3-45f2-abbb-287d48890ff9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e08b8fc-41c3-45f2-abbb-287d48890ff9\") pod 
\"openstack-galera-0\" (UID: \"bb6d3f3b-c2f7-4793-8a37-b892f720145c\") " pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.468000 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.580073 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-sbblj" event={"ID":"061190aa-acfc-4a5e-a441-b663786a8c79","Type":"ContainerStarted","Data":"23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00"} Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.580861 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.583782 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f","Type":"ContainerStarted","Data":"75477d66223d67b5f33d1557c9134a0635f5b4ecbed941bf8c7fe3e5798dc230"} Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.588513 4890 generic.go:334] "Generic (PLEG): container finished" podID="ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" containerID="60aebdc7d5972c75982e95d5607e423b0c68d03206e46e978425053e9bb5d5d8" exitCode=0 Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.588575 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" event={"ID":"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2","Type":"ContainerDied","Data":"60aebdc7d5972c75982e95d5607e423b0c68d03206e46e978425053e9bb5d5d8"} Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.602794 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.603808 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-qk8lr" event={"ID":"cf3f221a-cb1a-4582-98d7-1baca76cda7d","Type":"ContainerDied","Data":"635f8e9fbe97bb46c2c40b8759145f0b9f72b9fa954a80673e2ddaa877cecbfd"} Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.603880 4890 scope.go:117] "RemoveContainer" containerID="00e008b4078a6497f47c2fb62f9bacbbf8d8ba2aed5b952d9498f400df3c4acf" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.606697 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95587bc99-sbblj" podStartSLOduration=3.606676481 podStartE2EDuration="3.606676481s" podCreationTimestamp="2026-01-21 16:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:57:18.602325066 +0000 UTC m=+5120.963767475" watchObservedRunningTime="2026-01-21 16:57:18.606676481 +0000 UTC m=+5120.968118890" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.627218 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" event={"ID":"13957c23-c1bc-405b-adc0-50748d869cf7","Type":"ContainerDied","Data":"63fb566602e11ae2c4794af58f39249610b4993704d1d555fa165b6dda2a2339"} Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.627322 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-grx7m" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.658652 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wdcz7" Jan 21 16:57:18 crc kubenswrapper[4890]: E0121 16:57:18.700617 4890 secret.go:188] Couldn't get secret openstack/rabbitmq-cell1-erlang-cookie: failed to sync secret cache: timed out waiting for the condition Jan 21 16:57:18 crc kubenswrapper[4890]: E0121 16:57:18.700682 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: failed to sync configmap cache: timed out waiting for the condition Jan 21 16:57:18 crc kubenswrapper[4890]: E0121 16:57:18.701083 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e526a79-1777-471e-8030-64b43ff58732-erlang-cookie-secret podName:0e526a79-1777-471e-8030-64b43ff58732 nodeName:}" failed. No retries permitted until 2026-01-21 16:57:19.201062225 +0000 UTC m=+5121.562504634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "erlang-cookie-secret" (UniqueName: "kubernetes.io/secret/0e526a79-1777-471e-8030-64b43ff58732-erlang-cookie-secret") pod "rabbitmq-cell1-server-0" (UID: "0e526a79-1777-471e-8030-64b43ff58732") : failed to sync secret cache: timed out waiting for the condition Jan 21 16:57:18 crc kubenswrapper[4890]: E0121 16:57:18.701120 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-config-data podName:0e526a79-1777-471e-8030-64b43ff58732 nodeName:}" failed. No retries permitted until 2026-01-21 16:57:19.201099576 +0000 UTC m=+5121.562541985 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-config-data") pod "rabbitmq-cell1-server-0" (UID: "0e526a79-1777-471e-8030-64b43ff58732") : failed to sync configmap cache: timed out waiting for the condition Jan 21 16:57:18 crc kubenswrapper[4890]: E0121 16:57:18.700745 4890 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-plugins-conf: failed to sync configmap cache: timed out waiting for the condition Jan 21 16:57:18 crc kubenswrapper[4890]: E0121 16:57:18.701148 4890 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-plugins-conf podName:0e526a79-1777-471e-8030-64b43ff58732 nodeName:}" failed. No retries permitted until 2026-01-21 16:57:19.201142027 +0000 UTC m=+5121.562584436 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugins-conf" (UniqueName: "kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-plugins-conf") pod "rabbitmq-cell1-server-0" (UID: "0e526a79-1777-471e-8030-64b43ff58732") : failed to sync configmap cache: timed out waiting for the condition Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.711571 4890 scope.go:117] "RemoveContainer" containerID="c38a8611bd93c3f92bfd561475b247c8916bc012585c4ecac646d121171a168e" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.712884 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.779346 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-qk8lr"] Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.798421 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-qk8lr"] Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.813392 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5986db9b4f-grx7m"] Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.816102 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-grx7m"] Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.838112 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.902881 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 16:57:18 crc kubenswrapper[4890]: I0121 16:57:18.946786 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.223008 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.223501 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e526a79-1777-471e-8030-64b43ff58732-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.223546 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.224344 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.224795 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.228453 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e526a79-1777-471e-8030-64b43ff58732-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.371467 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.640609 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.643894 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bb6d3f3b-c2f7-4793-8a37-b892f720145c","Type":"ContainerStarted","Data":"17986707cfe4d84c828d65c69b0a22e77dce128aebd6e0477a769182cd8c538a"} Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.643946 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bb6d3f3b-c2f7-4793-8a37-b892f720145c","Type":"ContainerStarted","Data":"0140182e643032ba7c5c65e714ee9f101047aef93b5b3768ac9d9db2aa5489fa"} Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.644038 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.645315 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f","Type":"ContainerStarted","Data":"356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540"} Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.646932 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gt7vl" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.647727 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.650134 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.650148 4890 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"openstack-cell1-config-data" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.651393 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" event={"ID":"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2","Type":"ContainerStarted","Data":"667479ec7388c6aeecbc182d6cc7fcca76e3e9c893e3f18399a5d988f111ff08"} Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.651594 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.655924 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.724281 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" podStartSLOduration=3.7242636620000003 podStartE2EDuration="3.724263662s" podCreationTimestamp="2026-01-21 16:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:57:19.717413077 +0000 UTC m=+5122.078855486" watchObservedRunningTime="2026-01-21 16:57:19.724263662 +0000 UTC m=+5122.085706071" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.731412 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ec1333-3f37-4646-b941-a14dc29c7b34-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.731455 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b4ec1333-3f37-4646-b941-a14dc29c7b34-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.731476 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b4ec1333-3f37-4646-b941-a14dc29c7b34-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.731572 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-18cd8c9b-80c8-44d3-a6a8-81b15ffe9958\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18cd8c9b-80c8-44d3-a6a8-81b15ffe9958\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.731695 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4ec1333-3f37-4646-b941-a14dc29c7b34-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.731744 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4ec1333-3f37-4646-b941-a14dc29c7b34-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.731783 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8dkh\" (UniqueName: 
\"kubernetes.io/projected/b4ec1333-3f37-4646-b941-a14dc29c7b34-kube-api-access-x8dkh\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.731810 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4ec1333-3f37-4646-b941-a14dc29c7b34-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.823746 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:57:19 crc kubenswrapper[4890]: W0121 16:57:19.827805 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e526a79_1777_471e_8030_64b43ff58732.slice/crio-62df3dba43185ac32a6dae81552a39f53e7ce30fec7e9f887f532fceb3e63bb4 WatchSource:0}: Error finding container 62df3dba43185ac32a6dae81552a39f53e7ce30fec7e9f887f532fceb3e63bb4: Status 404 returned error can't find the container with id 62df3dba43185ac32a6dae81552a39f53e7ce30fec7e9f887f532fceb3e63bb4 Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.832923 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-18cd8c9b-80c8-44d3-a6a8-81b15ffe9958\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18cd8c9b-80c8-44d3-a6a8-81b15ffe9958\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.835882 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4ec1333-3f37-4646-b941-a14dc29c7b34-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.835962 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4ec1333-3f37-4646-b941-a14dc29c7b34-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.836046 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8dkh\" (UniqueName: \"kubernetes.io/projected/b4ec1333-3f37-4646-b941-a14dc29c7b34-kube-api-access-x8dkh\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.836084 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4ec1333-3f37-4646-b941-a14dc29c7b34-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.836128 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ec1333-3f37-4646-b941-a14dc29c7b34-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.836159 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b4ec1333-3f37-4646-b941-a14dc29c7b34-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.836188 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b4ec1333-3f37-4646-b941-a14dc29c7b34-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.837411 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b4ec1333-3f37-4646-b941-a14dc29c7b34-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.837622 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4ec1333-3f37-4646-b941-a14dc29c7b34-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.838110 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b4ec1333-3f37-4646-b941-a14dc29c7b34-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.838143 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4ec1333-3f37-4646-b941-a14dc29c7b34-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 
16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.841331 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4ec1333-3f37-4646-b941-a14dc29c7b34-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.847043 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ec1333-3f37-4646-b941-a14dc29c7b34-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.853004 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.853043 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-18cd8c9b-80c8-44d3-a6a8-81b15ffe9958\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18cd8c9b-80c8-44d3-a6a8-81b15ffe9958\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/be3e7671cfa0b9ff0426786840911f353d5bed1a272bfe7c5ea50941f085e673/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.855864 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8dkh\" (UniqueName: \"kubernetes.io/projected/b4ec1333-3f37-4646-b941-a14dc29c7b34-kube-api-access-x8dkh\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.885741 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-18cd8c9b-80c8-44d3-a6a8-81b15ffe9958\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18cd8c9b-80c8-44d3-a6a8-81b15ffe9958\") pod \"openstack-cell1-galera-0\" (UID: \"b4ec1333-3f37-4646-b941-a14dc29c7b34\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.928813 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13957c23-c1bc-405b-adc0-50748d869cf7" path="/var/lib/kubelet/pods/13957c23-c1bc-405b-adc0-50748d869cf7/volumes" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.929524 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf3f221a-cb1a-4582-98d7-1baca76cda7d" path="/var/lib/kubelet/pods/cf3f221a-cb1a-4582-98d7-1baca76cda7d/volumes" Jan 21 16:57:19 crc kubenswrapper[4890]: I0121 16:57:19.959651 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.110081 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.117453 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.130846 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.131036 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.138873 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5hnfx" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.141399 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f8509-8290-4173-ac62-16755f39ec90-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.141481 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8hs\" (UniqueName: \"kubernetes.io/projected/f51f8509-8290-4173-ac62-16755f39ec90-kube-api-access-wc8hs\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.141506 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f8509-8290-4173-ac62-16755f39ec90-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.141555 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f51f8509-8290-4173-ac62-16755f39ec90-config-data\") pod 
\"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.141581 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f51f8509-8290-4173-ac62-16755f39ec90-kolla-config\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.160334 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.247767 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc8hs\" (UniqueName: \"kubernetes.io/projected/f51f8509-8290-4173-ac62-16755f39ec90-kube-api-access-wc8hs\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.247847 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f8509-8290-4173-ac62-16755f39ec90-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.247920 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f51f8509-8290-4173-ac62-16755f39ec90-config-data\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.247960 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f51f8509-8290-4173-ac62-16755f39ec90-kolla-config\") pod \"memcached-0\" (UID: 
\"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.248037 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f8509-8290-4173-ac62-16755f39ec90-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.249303 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f51f8509-8290-4173-ac62-16755f39ec90-kolla-config\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.249982 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f51f8509-8290-4173-ac62-16755f39ec90-config-data\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.252827 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f8509-8290-4173-ac62-16755f39ec90-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.253079 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f8509-8290-4173-ac62-16755f39ec90-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.269918 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc8hs\" (UniqueName: 
\"kubernetes.io/projected/f51f8509-8290-4173-ac62-16755f39ec90-kube-api-access-wc8hs\") pod \"memcached-0\" (UID: \"f51f8509-8290-4173-ac62-16755f39ec90\") " pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.443847 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.468493 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 16:57:20 crc kubenswrapper[4890]: W0121 16:57:20.472121 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4ec1333_3f37_4646_b941_a14dc29c7b34.slice/crio-faa580782e3f129659c57274f447a99feacfca33cb2e2c156bfa9b6f09e2d285 WatchSource:0}: Error finding container faa580782e3f129659c57274f447a99feacfca33cb2e2c156bfa9b6f09e2d285: Status 404 returned error can't find the container with id faa580782e3f129659c57274f447a99feacfca33cb2e2c156bfa9b6f09e2d285 Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.665699 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b4ec1333-3f37-4646-b941-a14dc29c7b34","Type":"ContainerStarted","Data":"faa580782e3f129659c57274f447a99feacfca33cb2e2c156bfa9b6f09e2d285"} Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.674607 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e526a79-1777-471e-8030-64b43ff58732","Type":"ContainerStarted","Data":"62df3dba43185ac32a6dae81552a39f53e7ce30fec7e9f887f532fceb3e63bb4"} Jan 21 16:57:20 crc kubenswrapper[4890]: I0121 16:57:20.915759 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 16:57:21 crc kubenswrapper[4890]: I0121 16:57:21.685391 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"b4ec1333-3f37-4646-b941-a14dc29c7b34","Type":"ContainerStarted","Data":"1f2c01323228181f30e4f27748ec2acf30fa0628edcca24e7946b76c02e4f302"} Jan 21 16:57:21 crc kubenswrapper[4890]: I0121 16:57:21.687862 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f51f8509-8290-4173-ac62-16755f39ec90","Type":"ContainerStarted","Data":"0a469c94f85f6fc914f00a45876c66d0201fdabdf140f6d039d7449eaae06680"} Jan 21 16:57:21 crc kubenswrapper[4890]: I0121 16:57:21.687917 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f51f8509-8290-4173-ac62-16755f39ec90","Type":"ContainerStarted","Data":"7f9a03d2c4c209302a70b2493114eae4b7e773b6923e5d052a6c9ab20a5b78ca"} Jan 21 16:57:21 crc kubenswrapper[4890]: I0121 16:57:21.688188 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 16:57:21 crc kubenswrapper[4890]: I0121 16:57:21.689905 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e526a79-1777-471e-8030-64b43ff58732","Type":"ContainerStarted","Data":"a61ac33736cbb8daf1a087d6d95c631a776cf18d160f3bb0f3c794e6eca90c07"} Jan 21 16:57:21 crc kubenswrapper[4890]: I0121 16:57:21.728714 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.728688221 podStartE2EDuration="1.728688221s" podCreationTimestamp="2026-01-21 16:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:57:21.7278288 +0000 UTC m=+5124.089271219" watchObservedRunningTime="2026-01-21 16:57:21.728688221 +0000 UTC m=+5124.090130630" Jan 21 16:57:23 crc kubenswrapper[4890]: I0121 16:57:23.706116 4890 generic.go:334] "Generic (PLEG): container finished" podID="bb6d3f3b-c2f7-4793-8a37-b892f720145c" 
containerID="17986707cfe4d84c828d65c69b0a22e77dce128aebd6e0477a769182cd8c538a" exitCode=0 Jan 21 16:57:23 crc kubenswrapper[4890]: I0121 16:57:23.706149 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bb6d3f3b-c2f7-4793-8a37-b892f720145c","Type":"ContainerDied","Data":"17986707cfe4d84c828d65c69b0a22e77dce128aebd6e0477a769182cd8c538a"} Jan 21 16:57:24 crc kubenswrapper[4890]: I0121 16:57:24.717936 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bb6d3f3b-c2f7-4793-8a37-b892f720145c","Type":"ContainerStarted","Data":"a42505744e01e066033f3b7a5a8d058bba48b5be31a7854c276c3de681d0d3be"} Jan 21 16:57:24 crc kubenswrapper[4890]: I0121 16:57:24.720306 4890 generic.go:334] "Generic (PLEG): container finished" podID="b4ec1333-3f37-4646-b941-a14dc29c7b34" containerID="1f2c01323228181f30e4f27748ec2acf30fa0628edcca24e7946b76c02e4f302" exitCode=0 Jan 21 16:57:24 crc kubenswrapper[4890]: I0121 16:57:24.720342 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b4ec1333-3f37-4646-b941-a14dc29c7b34","Type":"ContainerDied","Data":"1f2c01323228181f30e4f27748ec2acf30fa0628edcca24e7946b76c02e4f302"} Jan 21 16:57:24 crc kubenswrapper[4890]: I0121 16:57:24.742424 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.74240712 podStartE2EDuration="7.74240712s" podCreationTimestamp="2026-01-21 16:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:57:24.735924714 +0000 UTC m=+5127.097367123" watchObservedRunningTime="2026-01-21 16:57:24.74240712 +0000 UTC m=+5127.103849519" Jan 21 16:57:25 crc kubenswrapper[4890]: I0121 16:57:25.728773 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"b4ec1333-3f37-4646-b941-a14dc29c7b34","Type":"ContainerStarted","Data":"3ede9095e943e4c57ebb0334fbbfdcb93a03a8c12759efe711a69c07ce550fb3"} Jan 21 16:57:25 crc kubenswrapper[4890]: I0121 16:57:25.755462 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.755443702 podStartE2EDuration="7.755443702s" podCreationTimestamp="2026-01-21 16:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:57:25.7495831 +0000 UTC m=+5128.111025509" watchObservedRunningTime="2026-01-21 16:57:25.755443702 +0000 UTC m=+5128.116886111" Jan 21 16:57:26 crc kubenswrapper[4890]: I0121 16:57:26.301441 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:26 crc kubenswrapper[4890]: I0121 16:57:26.640524 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:57:26 crc kubenswrapper[4890]: I0121 16:57:26.686524 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-sbblj"] Jan 21 16:57:26 crc kubenswrapper[4890]: I0121 16:57:26.734684 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95587bc99-sbblj" podUID="061190aa-acfc-4a5e-a441-b663786a8c79" containerName="dnsmasq-dns" containerID="cri-o://23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00" gracePeriod=10 Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.679988 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.750160 4890 generic.go:334] "Generic (PLEG): container finished" podID="061190aa-acfc-4a5e-a441-b663786a8c79" containerID="23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00" exitCode=0 Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.750209 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-sbblj" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.750223 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-sbblj" event={"ID":"061190aa-acfc-4a5e-a441-b663786a8c79","Type":"ContainerDied","Data":"23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00"} Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.750640 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-sbblj" event={"ID":"061190aa-acfc-4a5e-a441-b663786a8c79","Type":"ContainerDied","Data":"e4fc8aa6831073ba4fde49067603072ad09d4b2d5096e96f16fdb38456c1af40"} Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.750659 4890 scope.go:117] "RemoveContainer" containerID="23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.767974 4890 scope.go:117] "RemoveContainer" containerID="1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.773572 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pfnd\" (UniqueName: \"kubernetes.io/projected/061190aa-acfc-4a5e-a441-b663786a8c79-kube-api-access-8pfnd\") pod \"061190aa-acfc-4a5e-a441-b663786a8c79\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.773824 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-config\") pod \"061190aa-acfc-4a5e-a441-b663786a8c79\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.773867 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-dns-svc\") pod \"061190aa-acfc-4a5e-a441-b663786a8c79\" (UID: \"061190aa-acfc-4a5e-a441-b663786a8c79\") " Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.783970 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061190aa-acfc-4a5e-a441-b663786a8c79-kube-api-access-8pfnd" (OuterVolumeSpecName: "kube-api-access-8pfnd") pod "061190aa-acfc-4a5e-a441-b663786a8c79" (UID: "061190aa-acfc-4a5e-a441-b663786a8c79"). InnerVolumeSpecName "kube-api-access-8pfnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.817845 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-config" (OuterVolumeSpecName: "config") pod "061190aa-acfc-4a5e-a441-b663786a8c79" (UID: "061190aa-acfc-4a5e-a441-b663786a8c79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.817952 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "061190aa-acfc-4a5e-a441-b663786a8c79" (UID: "061190aa-acfc-4a5e-a441-b663786a8c79"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.875546 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.875596 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/061190aa-acfc-4a5e-a441-b663786a8c79-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.875615 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pfnd\" (UniqueName: \"kubernetes.io/projected/061190aa-acfc-4a5e-a441-b663786a8c79-kube-api-access-8pfnd\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.900993 4890 scope.go:117] "RemoveContainer" containerID="23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00" Jan 21 16:57:27 crc kubenswrapper[4890]: E0121 16:57:27.929467 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00\": container with ID starting with 23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00 not found: ID does not exist" containerID="23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.929552 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00"} err="failed to get container status \"23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00\": rpc error: code = NotFound desc = could not find container \"23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00\": container with ID starting with 
23fc1a00a7aa153136e7f438df4904fc58737a4b1ec220fbcc6638196d880e00 not found: ID does not exist" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.929598 4890 scope.go:117] "RemoveContainer" containerID="1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d" Jan 21 16:57:27 crc kubenswrapper[4890]: E0121 16:57:27.930388 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d\": container with ID starting with 1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d not found: ID does not exist" containerID="1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d" Jan 21 16:57:27 crc kubenswrapper[4890]: I0121 16:57:27.930487 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d"} err="failed to get container status \"1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d\": rpc error: code = NotFound desc = could not find container \"1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d\": container with ID starting with 1e9b7f6566b75f747a0fc491777b18846ab5a1e8fc04da6bf4bffcf7ca279d1d not found: ID does not exist" Jan 21 16:57:28 crc kubenswrapper[4890]: I0121 16:57:28.069924 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-sbblj"] Jan 21 16:57:28 crc kubenswrapper[4890]: I0121 16:57:28.076153 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-sbblj"] Jan 21 16:57:28 crc kubenswrapper[4890]: I0121 16:57:28.468665 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 16:57:28 crc kubenswrapper[4890]: I0121 16:57:28.468985 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" 
Jan 21 16:57:28 crc kubenswrapper[4890]: I0121 16:57:28.537231 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 16:57:28 crc kubenswrapper[4890]: I0121 16:57:28.836696 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 16:57:29 crc kubenswrapper[4890]: I0121 16:57:29.923129 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="061190aa-acfc-4a5e-a441-b663786a8c79" path="/var/lib/kubelet/pods/061190aa-acfc-4a5e-a441-b663786a8c79/volumes" Jan 21 16:57:29 crc kubenswrapper[4890]: I0121 16:57:29.959787 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:29 crc kubenswrapper[4890]: I0121 16:57:29.959828 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:30 crc kubenswrapper[4890]: I0121 16:57:30.445471 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 16:57:32 crc kubenswrapper[4890]: I0121 16:57:32.267852 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:32 crc kubenswrapper[4890]: I0121 16:57:32.345701 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.072130 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vbdgj"] Jan 21 16:57:37 crc kubenswrapper[4890]: E0121 16:57:37.072926 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061190aa-acfc-4a5e-a441-b663786a8c79" containerName="dnsmasq-dns" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.072938 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="061190aa-acfc-4a5e-a441-b663786a8c79" 
containerName="dnsmasq-dns" Jan 21 16:57:37 crc kubenswrapper[4890]: E0121 16:57:37.072967 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061190aa-acfc-4a5e-a441-b663786a8c79" containerName="init" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.072973 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="061190aa-acfc-4a5e-a441-b663786a8c79" containerName="init" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.073102 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="061190aa-acfc-4a5e-a441-b663786a8c79" containerName="dnsmasq-dns" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.073610 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vbdgj" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.075516 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.082722 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vbdgj"] Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.140314 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69h5x\" (UniqueName: \"kubernetes.io/projected/18aafa44-b342-4631-aa6d-0e0860698077-kube-api-access-69h5x\") pod \"root-account-create-update-vbdgj\" (UID: \"18aafa44-b342-4631-aa6d-0e0860698077\") " pod="openstack/root-account-create-update-vbdgj" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.140523 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18aafa44-b342-4631-aa6d-0e0860698077-operator-scripts\") pod \"root-account-create-update-vbdgj\" (UID: \"18aafa44-b342-4631-aa6d-0e0860698077\") " pod="openstack/root-account-create-update-vbdgj" Jan 21 
16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.241340 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18aafa44-b342-4631-aa6d-0e0860698077-operator-scripts\") pod \"root-account-create-update-vbdgj\" (UID: \"18aafa44-b342-4631-aa6d-0e0860698077\") " pod="openstack/root-account-create-update-vbdgj" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.241419 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69h5x\" (UniqueName: \"kubernetes.io/projected/18aafa44-b342-4631-aa6d-0e0860698077-kube-api-access-69h5x\") pod \"root-account-create-update-vbdgj\" (UID: \"18aafa44-b342-4631-aa6d-0e0860698077\") " pod="openstack/root-account-create-update-vbdgj" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.242252 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18aafa44-b342-4631-aa6d-0e0860698077-operator-scripts\") pod \"root-account-create-update-vbdgj\" (UID: \"18aafa44-b342-4631-aa6d-0e0860698077\") " pod="openstack/root-account-create-update-vbdgj" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.258448 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69h5x\" (UniqueName: \"kubernetes.io/projected/18aafa44-b342-4631-aa6d-0e0860698077-kube-api-access-69h5x\") pod \"root-account-create-update-vbdgj\" (UID: \"18aafa44-b342-4631-aa6d-0e0860698077\") " pod="openstack/root-account-create-update-vbdgj" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.451909 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vbdgj" Jan 21 16:57:37 crc kubenswrapper[4890]: I0121 16:57:37.884774 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vbdgj"] Jan 21 16:57:37 crc kubenswrapper[4890]: W0121 16:57:37.888772 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18aafa44_b342_4631_aa6d_0e0860698077.slice/crio-b06630fe07f4bddc0284ffebffb5a9982577995d0bd008590472f917c7d6b458 WatchSource:0}: Error finding container b06630fe07f4bddc0284ffebffb5a9982577995d0bd008590472f917c7d6b458: Status 404 returned error can't find the container with id b06630fe07f4bddc0284ffebffb5a9982577995d0bd008590472f917c7d6b458 Jan 21 16:57:38 crc kubenswrapper[4890]: I0121 16:57:38.822940 4890 generic.go:334] "Generic (PLEG): container finished" podID="18aafa44-b342-4631-aa6d-0e0860698077" containerID="ddd603ae2c788ecc2f7372dd6b57ac470f72fee50086fc76794d013e0718b226" exitCode=0 Jan 21 16:57:38 crc kubenswrapper[4890]: I0121 16:57:38.823152 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbdgj" event={"ID":"18aafa44-b342-4631-aa6d-0e0860698077","Type":"ContainerDied","Data":"ddd603ae2c788ecc2f7372dd6b57ac470f72fee50086fc76794d013e0718b226"} Jan 21 16:57:38 crc kubenswrapper[4890]: I0121 16:57:38.824480 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbdgj" event={"ID":"18aafa44-b342-4631-aa6d-0e0860698077","Type":"ContainerStarted","Data":"b06630fe07f4bddc0284ffebffb5a9982577995d0bd008590472f917c7d6b458"} Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.115709 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vbdgj" Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.287457 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18aafa44-b342-4631-aa6d-0e0860698077-operator-scripts\") pod \"18aafa44-b342-4631-aa6d-0e0860698077\" (UID: \"18aafa44-b342-4631-aa6d-0e0860698077\") " Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.287536 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69h5x\" (UniqueName: \"kubernetes.io/projected/18aafa44-b342-4631-aa6d-0e0860698077-kube-api-access-69h5x\") pod \"18aafa44-b342-4631-aa6d-0e0860698077\" (UID: \"18aafa44-b342-4631-aa6d-0e0860698077\") " Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.288999 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18aafa44-b342-4631-aa6d-0e0860698077-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18aafa44-b342-4631-aa6d-0e0860698077" (UID: "18aafa44-b342-4631-aa6d-0e0860698077"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.293800 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18aafa44-b342-4631-aa6d-0e0860698077-kube-api-access-69h5x" (OuterVolumeSpecName: "kube-api-access-69h5x") pod "18aafa44-b342-4631-aa6d-0e0860698077" (UID: "18aafa44-b342-4631-aa6d-0e0860698077"). InnerVolumeSpecName "kube-api-access-69h5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.388846 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18aafa44-b342-4631-aa6d-0e0860698077-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.388882 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69h5x\" (UniqueName: \"kubernetes.io/projected/18aafa44-b342-4631-aa6d-0e0860698077-kube-api-access-69h5x\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.839095 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbdgj" event={"ID":"18aafa44-b342-4631-aa6d-0e0860698077","Type":"ContainerDied","Data":"b06630fe07f4bddc0284ffebffb5a9982577995d0bd008590472f917c7d6b458"} Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.839130 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b06630fe07f4bddc0284ffebffb5a9982577995d0bd008590472f917c7d6b458" Jan 21 16:57:40 crc kubenswrapper[4890]: I0121 16:57:40.839182 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vbdgj" Jan 21 16:57:43 crc kubenswrapper[4890]: I0121 16:57:43.607069 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vbdgj"] Jan 21 16:57:43 crc kubenswrapper[4890]: I0121 16:57:43.613872 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vbdgj"] Jan 21 16:57:43 crc kubenswrapper[4890]: I0121 16:57:43.923738 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18aafa44-b342-4631-aa6d-0e0860698077" path="/var/lib/kubelet/pods/18aafa44-b342-4631-aa6d-0e0860698077/volumes" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.603320 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ctxvm"] Jan 21 16:57:48 crc kubenswrapper[4890]: E0121 16:57:48.604046 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18aafa44-b342-4631-aa6d-0e0860698077" containerName="mariadb-account-create-update" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.604064 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="18aafa44-b342-4631-aa6d-0e0860698077" containerName="mariadb-account-create-update" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.604231 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="18aafa44-b342-4631-aa6d-0e0860698077" containerName="mariadb-account-create-update" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.604820 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ctxvm" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.607486 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.610942 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ctxvm"] Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.711256 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5cp7\" (UniqueName: \"kubernetes.io/projected/4b8bafda-f7fb-4ec8-890b-6558c7685156-kube-api-access-j5cp7\") pod \"root-account-create-update-ctxvm\" (UID: \"4b8bafda-f7fb-4ec8-890b-6558c7685156\") " pod="openstack/root-account-create-update-ctxvm" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.711507 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8bafda-f7fb-4ec8-890b-6558c7685156-operator-scripts\") pod \"root-account-create-update-ctxvm\" (UID: \"4b8bafda-f7fb-4ec8-890b-6558c7685156\") " pod="openstack/root-account-create-update-ctxvm" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.813099 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8bafda-f7fb-4ec8-890b-6558c7685156-operator-scripts\") pod \"root-account-create-update-ctxvm\" (UID: \"4b8bafda-f7fb-4ec8-890b-6558c7685156\") " pod="openstack/root-account-create-update-ctxvm" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.813584 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5cp7\" (UniqueName: \"kubernetes.io/projected/4b8bafda-f7fb-4ec8-890b-6558c7685156-kube-api-access-j5cp7\") pod \"root-account-create-update-ctxvm\" (UID: 
\"4b8bafda-f7fb-4ec8-890b-6558c7685156\") " pod="openstack/root-account-create-update-ctxvm" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.814116 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8bafda-f7fb-4ec8-890b-6558c7685156-operator-scripts\") pod \"root-account-create-update-ctxvm\" (UID: \"4b8bafda-f7fb-4ec8-890b-6558c7685156\") " pod="openstack/root-account-create-update-ctxvm" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.836590 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5cp7\" (UniqueName: \"kubernetes.io/projected/4b8bafda-f7fb-4ec8-890b-6558c7685156-kube-api-access-j5cp7\") pod \"root-account-create-update-ctxvm\" (UID: \"4b8bafda-f7fb-4ec8-890b-6558c7685156\") " pod="openstack/root-account-create-update-ctxvm" Jan 21 16:57:48 crc kubenswrapper[4890]: I0121 16:57:48.920826 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ctxvm"
Jan 21 16:57:49 crc kubenswrapper[4890]: I0121 16:57:49.338748 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ctxvm"]
Jan 21 16:57:49 crc kubenswrapper[4890]: I0121 16:57:49.899820 4890 generic.go:334] "Generic (PLEG): container finished" podID="4b8bafda-f7fb-4ec8-890b-6558c7685156" containerID="f738be80314898b6478801a14f07ed4ba738804a92d428847390f33970597cb5" exitCode=0
Jan 21 16:57:49 crc kubenswrapper[4890]: I0121 16:57:49.899873 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctxvm" event={"ID":"4b8bafda-f7fb-4ec8-890b-6558c7685156","Type":"ContainerDied","Data":"f738be80314898b6478801a14f07ed4ba738804a92d428847390f33970597cb5"}
Jan 21 16:57:49 crc kubenswrapper[4890]: I0121 16:57:49.900183 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctxvm" event={"ID":"4b8bafda-f7fb-4ec8-890b-6558c7685156","Type":"ContainerStarted","Data":"502c2467ac76dedd6027de9de0a73f9f4527835752422f0c2c07684e312e91b6"}
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.265866 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ctxvm"
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.459796 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5cp7\" (UniqueName: \"kubernetes.io/projected/4b8bafda-f7fb-4ec8-890b-6558c7685156-kube-api-access-j5cp7\") pod \"4b8bafda-f7fb-4ec8-890b-6558c7685156\" (UID: \"4b8bafda-f7fb-4ec8-890b-6558c7685156\") "
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.460385 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8bafda-f7fb-4ec8-890b-6558c7685156-operator-scripts\") pod \"4b8bafda-f7fb-4ec8-890b-6558c7685156\" (UID: \"4b8bafda-f7fb-4ec8-890b-6558c7685156\") "
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.461040 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b8bafda-f7fb-4ec8-890b-6558c7685156-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b8bafda-f7fb-4ec8-890b-6558c7685156" (UID: "4b8bafda-f7fb-4ec8-890b-6558c7685156"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.466750 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8bafda-f7fb-4ec8-890b-6558c7685156-kube-api-access-j5cp7" (OuterVolumeSpecName: "kube-api-access-j5cp7") pod "4b8bafda-f7fb-4ec8-890b-6558c7685156" (UID: "4b8bafda-f7fb-4ec8-890b-6558c7685156"). InnerVolumeSpecName "kube-api-access-j5cp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.562161 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8bafda-f7fb-4ec8-890b-6558c7685156-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.562201 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5cp7\" (UniqueName: \"kubernetes.io/projected/4b8bafda-f7fb-4ec8-890b-6558c7685156-kube-api-access-j5cp7\") on node \"crc\" DevicePath \"\""
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.914505 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ctxvm"
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.915980 4890 generic.go:334] "Generic (PLEG): container finished" podID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" containerID="356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540" exitCode=0
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.922443 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctxvm" event={"ID":"4b8bafda-f7fb-4ec8-890b-6558c7685156","Type":"ContainerDied","Data":"502c2467ac76dedd6027de9de0a73f9f4527835752422f0c2c07684e312e91b6"}
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.922483 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="502c2467ac76dedd6027de9de0a73f9f4527835752422f0c2c07684e312e91b6"
Jan 21 16:57:51 crc kubenswrapper[4890]: I0121 16:57:51.922495 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f","Type":"ContainerDied","Data":"356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540"}
Jan 21 16:57:52 crc kubenswrapper[4890]: I0121 16:57:52.927263 4890 generic.go:334] "Generic (PLEG): container finished" podID="0e526a79-1777-471e-8030-64b43ff58732" containerID="a61ac33736cbb8daf1a087d6d95c631a776cf18d160f3bb0f3c794e6eca90c07" exitCode=0
Jan 21 16:57:52 crc kubenswrapper[4890]: I0121 16:57:52.927340 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e526a79-1777-471e-8030-64b43ff58732","Type":"ContainerDied","Data":"a61ac33736cbb8daf1a087d6d95c631a776cf18d160f3bb0f3c794e6eca90c07"}
Jan 21 16:57:52 crc kubenswrapper[4890]: I0121 16:57:52.932987 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f","Type":"ContainerStarted","Data":"5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4"}
Jan 21 16:57:52 crc kubenswrapper[4890]: I0121 16:57:52.933731 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 21 16:57:52 crc kubenswrapper[4890]: I0121 16:57:52.985230 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.985211321 podStartE2EDuration="36.985211321s" podCreationTimestamp="2026-01-21 16:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:57:52.983023168 +0000 UTC m=+5155.344465577" watchObservedRunningTime="2026-01-21 16:57:52.985211321 +0000 UTC m=+5155.346653730"
Jan 21 16:57:53 crc kubenswrapper[4890]: I0121 16:57:53.945287 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e526a79-1777-471e-8030-64b43ff58732","Type":"ContainerStarted","Data":"97a2f480dc5915dc92e9bf23357a91a1f538ea2ad5b13415ab05dc7d14cdb299"}
Jan 21 16:57:53 crc kubenswrapper[4890]: I0121 16:57:53.945954 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:57:53 crc kubenswrapper[4890]: I0121 16:57:53.976522 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.976352222 podStartE2EDuration="37.976352222s" podCreationTimestamp="2026-01-21 16:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:57:53.973198746 +0000 UTC m=+5156.334641165" watchObservedRunningTime="2026-01-21 16:57:53.976352222 +0000 UTC m=+5156.337794631"
Jan 21 16:58:07 crc kubenswrapper[4890]: I0121 16:58:07.381531 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 21 16:58:09 crc kubenswrapper[4890]: I0121 16:58:09.374507 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.516155 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-b77mx"]
Jan 21 16:58:16 crc kubenswrapper[4890]: E0121 16:58:16.517370 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8bafda-f7fb-4ec8-890b-6558c7685156" containerName="mariadb-account-create-update"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.517390 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8bafda-f7fb-4ec8-890b-6558c7685156" containerName="mariadb-account-create-update"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.517577 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8bafda-f7fb-4ec8-890b-6558c7685156" containerName="mariadb-account-create-update"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.518646 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.535763 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-b77mx"]
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.635455 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbg8\" (UniqueName: \"kubernetes.io/projected/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-kube-api-access-dlbg8\") pod \"dnsmasq-dns-699964fbc-b77mx\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.635546 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-config\") pod \"dnsmasq-dns-699964fbc-b77mx\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.635772 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-dns-svc\") pod \"dnsmasq-dns-699964fbc-b77mx\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.737123 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-dns-svc\") pod \"dnsmasq-dns-699964fbc-b77mx\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.737203 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlbg8\" (UniqueName: \"kubernetes.io/projected/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-kube-api-access-dlbg8\") pod \"dnsmasq-dns-699964fbc-b77mx\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.737279 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-config\") pod \"dnsmasq-dns-699964fbc-b77mx\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.738160 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-dns-svc\") pod \"dnsmasq-dns-699964fbc-b77mx\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.738200 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-config\") pod \"dnsmasq-dns-699964fbc-b77mx\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.755903 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlbg8\" (UniqueName: \"kubernetes.io/projected/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-kube-api-access-dlbg8\") pod \"dnsmasq-dns-699964fbc-b77mx\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:16 crc kubenswrapper[4890]: I0121 16:58:16.862734 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:17 crc kubenswrapper[4890]: I0121 16:58:17.175793 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 16:58:17 crc kubenswrapper[4890]: I0121 16:58:17.345237 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-b77mx"]
Jan 21 16:58:17 crc kubenswrapper[4890]: I0121 16:58:17.762518 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 16:58:18 crc kubenswrapper[4890]: I0121 16:58:18.142820 4890 generic.go:334] "Generic (PLEG): container finished" podID="65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" containerID="85b521e5752cdd4e5ef8db6630d0e0d520cc843370b388c0e6ec03c6b912ebc7" exitCode=0
Jan 21 16:58:18 crc kubenswrapper[4890]: I0121 16:58:18.143124 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-b77mx" event={"ID":"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b","Type":"ContainerDied","Data":"85b521e5752cdd4e5ef8db6630d0e0d520cc843370b388c0e6ec03c6b912ebc7"}
Jan 21 16:58:18 crc kubenswrapper[4890]: I0121 16:58:18.143273 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-b77mx" event={"ID":"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b","Type":"ContainerStarted","Data":"84a5c528a0d2e91b7cc305373b91a4a25700cf881454e5b0780f41644d770608"}
Jan 21 16:58:19 crc kubenswrapper[4890]: I0121 16:58:19.151297 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-b77mx" event={"ID":"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b","Type":"ContainerStarted","Data":"9bad55231cf2b6afbc5023006945a275739f16f2298dedd54693ee5a0d5d7fde"}
Jan 21 16:58:19 crc kubenswrapper[4890]: I0121 16:58:19.151716 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:19 crc kubenswrapper[4890]: I0121 16:58:19.170316 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-b77mx" podStartSLOduration=3.170299692 podStartE2EDuration="3.170299692s" podCreationTimestamp="2026-01-21 16:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:58:19.167448203 +0000 UTC m=+5181.528890612" watchObservedRunningTime="2026-01-21 16:58:19.170299692 +0000 UTC m=+5181.531742101"
Jan 21 16:58:21 crc kubenswrapper[4890]: I0121 16:58:21.309062 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" containerName="rabbitmq" containerID="cri-o://5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4" gracePeriod=604796
Jan 21 16:58:21 crc kubenswrapper[4890]: I0121 16:58:21.722513 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0e526a79-1777-471e-8030-64b43ff58732" containerName="rabbitmq" containerID="cri-o://97a2f480dc5915dc92e9bf23357a91a1f538ea2ad5b13415ab05dc7d14cdb299" gracePeriod=604797
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.717864 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dt69t"]
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.720147 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.735142 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dt69t"]
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.858924 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-catalog-content\") pod \"redhat-marketplace-dt69t\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.859148 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-utilities\") pod \"redhat-marketplace-dt69t\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.859260 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwt2\" (UniqueName: \"kubernetes.io/projected/72d7c371-48e7-4378-8e77-ca630199e3fb-kube-api-access-lmwt2\") pod \"redhat-marketplace-dt69t\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.916163 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qklmj"]
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.918116 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.930321 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qklmj"]
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.960825 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-catalog-content\") pod \"redhat-marketplace-dt69t\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.961011 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-utilities\") pod \"redhat-marketplace-dt69t\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.961073 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmwt2\" (UniqueName: \"kubernetes.io/projected/72d7c371-48e7-4378-8e77-ca630199e3fb-kube-api-access-lmwt2\") pod \"redhat-marketplace-dt69t\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.961546 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-catalog-content\") pod \"redhat-marketplace-dt69t\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.961577 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-utilities\") pod \"redhat-marketplace-dt69t\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:24 crc kubenswrapper[4890]: I0121 16:58:24.984959 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmwt2\" (UniqueName: \"kubernetes.io/projected/72d7c371-48e7-4378-8e77-ca630199e3fb-kube-api-access-lmwt2\") pod \"redhat-marketplace-dt69t\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.040385 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dt69t"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.062021 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-catalog-content\") pod \"redhat-operators-qklmj\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.062091 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-utilities\") pod \"redhat-operators-qklmj\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.062128 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w76bn\" (UniqueName: \"kubernetes.io/projected/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-kube-api-access-w76bn\") pod \"redhat-operators-qklmj\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.164105 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-catalog-content\") pod \"redhat-operators-qklmj\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.164400 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-utilities\") pod \"redhat-operators-qklmj\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.164430 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w76bn\" (UniqueName: \"kubernetes.io/projected/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-kube-api-access-w76bn\") pod \"redhat-operators-qklmj\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.164717 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-utilities\") pod \"redhat-operators-qklmj\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.170221 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-catalog-content\") pod \"redhat-operators-qklmj\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.185121 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w76bn\" (UniqueName: \"kubernetes.io/projected/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-kube-api-access-w76bn\") pod \"redhat-operators-qklmj\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.239673 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qklmj"
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.533993 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dt69t"]
Jan 21 16:58:25 crc kubenswrapper[4890]: I0121 16:58:25.755925 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qklmj"]
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.205468 4890 generic.go:334] "Generic (PLEG): container finished" podID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerID="4dfb584954008f887aa6e76e724a39d8e832779b9890e52312e95bba3df1acf4" exitCode=0
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.205547 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dt69t" event={"ID":"72d7c371-48e7-4378-8e77-ca630199e3fb","Type":"ContainerDied","Data":"4dfb584954008f887aa6e76e724a39d8e832779b9890e52312e95bba3df1acf4"}
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.205832 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dt69t" event={"ID":"72d7c371-48e7-4378-8e77-ca630199e3fb","Type":"ContainerStarted","Data":"1e2df865a4024904fc9a24abe673895e35828ac98c1fc05529ccac5094d5bfdd"}
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.208068 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.208239 4890 generic.go:334] "Generic (PLEG): container finished" podID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerID="ca28e35411d8d5873c72c6fcda3980fbf1ea0cb8c34cc2701227e1eb62f7b4b5" exitCode=0
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.208336 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qklmj" event={"ID":"b2d2dc5d-6583-425a-8a8e-068ea7a715b2","Type":"ContainerDied","Data":"ca28e35411d8d5873c72c6fcda3980fbf1ea0cb8c34cc2701227e1eb62f7b4b5"}
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.208506 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qklmj" event={"ID":"b2d2dc5d-6583-425a-8a8e-068ea7a715b2","Type":"ContainerStarted","Data":"48d21873ca40d06746954518a49af9c7213dedb5d488c3694c00661701b4c966"}
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.865168 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-b77mx"
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.915831 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-zpjzh"]
Jan 21 16:58:26 crc kubenswrapper[4890]: I0121 16:58:26.916134 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" podUID="ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" containerName="dnsmasq-dns" containerID="cri-o://667479ec7388c6aeecbc182d6cc7fcca76e3e9c893e3f18399a5d988f111ff08" gracePeriod=10
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.228502 4890 generic.go:334] "Generic (PLEG): container finished" podID="ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" containerID="667479ec7388c6aeecbc182d6cc7fcca76e3e9c893e3f18399a5d988f111ff08" exitCode=0
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.228716 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" event={"ID":"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2","Type":"ContainerDied","Data":"667479ec7388c6aeecbc182d6cc7fcca76e3e9c893e3f18399a5d988f111ff08"}
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.380091 4890 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.243:5671: connect: connection refused"
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.399653 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh"
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.510593 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-config\") pod \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") "
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.510672 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-dns-svc\") pod \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") "
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.510698 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdlx7\" (UniqueName: \"kubernetes.io/projected/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-kube-api-access-fdlx7\") pod \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\" (UID: \"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2\") "
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.526835 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-kube-api-access-fdlx7" (OuterVolumeSpecName: "kube-api-access-fdlx7") pod "ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" (UID: "ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2"). InnerVolumeSpecName "kube-api-access-fdlx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.545741 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-config" (OuterVolumeSpecName: "config") pod "ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" (UID: "ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.546105 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" (UID: "ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.612830 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.612877 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 16:58:27 crc kubenswrapper[4890]: I0121 16:58:27.612892 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdlx7\" (UniqueName: \"kubernetes.io/projected/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2-kube-api-access-fdlx7\") on node \"crc\" DevicePath \"\""
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.079776 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.126893 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-pod-info\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.126956 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-plugins-conf\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.126984 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-config-data\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.127012 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flvt6\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-kube-api-access-flvt6\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.127072 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-server-conf\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.127094 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-erlang-cookie\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.127121 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-plugins\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.127259 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.127276 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-confd\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.127307 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-tls\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.127340 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-erlang-cookie-secret\") pod \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\" (UID: \"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f\") "
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.127811 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.128758 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.129509 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.133584 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-kube-api-access-flvt6" (OuterVolumeSpecName: "kube-api-access-flvt6") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "kube-api-access-flvt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.139009 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-pod-info" (OuterVolumeSpecName: "pod-info") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.142701 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.144900 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0" (OuterVolumeSpecName: "persistence") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.164238 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-config-data" (OuterVolumeSpecName: "config-data") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.166575 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.203303 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-server-conf" (OuterVolumeSpecName: "server-conf") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228820 4890 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228855 4890 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228863 4890 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228871 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 
16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228881 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flvt6\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-kube-api-access-flvt6\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228891 4890 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228900 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228909 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228940 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") on node \"crc\" " Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.228951 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.248506 4890 generic.go:334] "Generic (PLEG): container finished" podID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" containerID="5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4" exitCode=0 Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.248940 4890 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f","Type":"ContainerDied","Data":"5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4"} Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.248974 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d57a8f7-2e2b-41ff-8274-2daf71db0e8f","Type":"ContainerDied","Data":"75477d66223d67b5f33d1557c9134a0635f5b4ecbed941bf8c7fe3e5798dc230"} Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.248993 4890 scope.go:117] "RemoveContainer" containerID="5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.249151 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.253984 4890 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.254148 4890 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0") on node "crc" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.254362 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" event={"ID":"ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2","Type":"ContainerDied","Data":"2f1aec30151abc47eeacc0aad6ce396ed4a1283b9787f67ed9b84d9bf5219227"} Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.254404 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-zpjzh" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.259096 4890 generic.go:334] "Generic (PLEG): container finished" podID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerID="fbda557974367e6a765a256c185b2982a623f01f86fabb0956ece7bc1b5692e7" exitCode=0 Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.259236 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dt69t" event={"ID":"72d7c371-48e7-4378-8e77-ca630199e3fb","Type":"ContainerDied","Data":"fbda557974367e6a765a256c185b2982a623f01f86fabb0956ece7bc1b5692e7"} Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.263663 4890 generic.go:334] "Generic (PLEG): container finished" podID="0e526a79-1777-471e-8030-64b43ff58732" containerID="97a2f480dc5915dc92e9bf23357a91a1f538ea2ad5b13415ab05dc7d14cdb299" exitCode=0 Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.263753 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e526a79-1777-471e-8030-64b43ff58732","Type":"ContainerDied","Data":"97a2f480dc5915dc92e9bf23357a91a1f538ea2ad5b13415ab05dc7d14cdb299"} Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.266219 4890 generic.go:334] "Generic (PLEG): container finished" podID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerID="4f4c3596372f59097945b59efef5f8aa502d20aa739de724724c80e93cf600bb" exitCode=0 Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.266250 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qklmj" event={"ID":"b2d2dc5d-6583-425a-8a8e-068ea7a715b2","Type":"ContainerDied","Data":"4f4c3596372f59097945b59efef5f8aa502d20aa739de724724c80e93cf600bb"} Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.323737 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" (UID: "6d57a8f7-2e2b-41ff-8274-2daf71db0e8f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.330867 4890 reconciler_common.go:293] "Volume detached for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:28 crc kubenswrapper[4890]: I0121 16:58:28.330920 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.347202 4890 scope.go:117] "RemoveContainer" containerID="356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.354888 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.377314 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-zpjzh"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.380830 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-zpjzh"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.397559 4890 scope.go:117] "RemoveContainer" containerID="5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4" Jan 21 16:58:29 crc kubenswrapper[4890]: E0121 16:58:28.398693 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4\": container with ID starting with 5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4 not found: ID does not exist" containerID="5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.398789 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4"} err="failed to get container status \"5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4\": rpc error: code = NotFound desc = could not find container \"5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4\": container with ID starting with 5f34f99d1b131431edb08a423e9b2d0abe01e0a2729c9d80d3b4a1cafee336b4 not found: ID does not exist" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.398828 4890 scope.go:117] "RemoveContainer" containerID="356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540" Jan 21 16:58:29 crc kubenswrapper[4890]: E0121 16:58:28.399179 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540\": container with ID starting with 356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540 not found: ID does not exist" containerID="356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.399204 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540"} err="failed to get container status \"356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540\": rpc error: code = NotFound desc = could not find container \"356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540\": container with ID starting with 356faa21c0626ee866465fe607cdb0294b3ce84ce2689618779a1d7980aac540 not found: ID does not exist" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.399223 4890 scope.go:117] "RemoveContainer" containerID="667479ec7388c6aeecbc182d6cc7fcca76e3e9c893e3f18399a5d988f111ff08" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.431951 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjcsf\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-kube-api-access-tjcsf\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.432059 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-tls\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.432095 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/0e526a79-1777-471e-8030-64b43ff58732-pod-info\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.432122 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-plugins\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.432144 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-server-conf\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.432674 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-config-data\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.432707 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-erlang-cookie\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.432746 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-plugins-conf\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 
16:58:28.432778 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e526a79-1777-471e-8030-64b43ff58732-erlang-cookie-secret\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.433632 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.433671 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-confd\") pod \"0e526a79-1777-471e-8030-64b43ff58732\" (UID: \"0e526a79-1777-471e-8030-64b43ff58732\") " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.438912 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.444507 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0e526a79-1777-471e-8030-64b43ff58732-pod-info" (OuterVolumeSpecName: "pod-info") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.444893 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.447471 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.447693 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.451583 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e526a79-1777-471e-8030-64b43ff58732-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.453642 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-kube-api-access-tjcsf" (OuterVolumeSpecName: "kube-api-access-tjcsf") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "kube-api-access-tjcsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.463537 4890 scope.go:117] "RemoveContainer" containerID="60aebdc7d5972c75982e95d5607e423b0c68d03206e46e978425053e9bb5d5d8" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.467391 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399" (OuterVolumeSpecName: "persistence") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "pvc-f02d4df8-b7fd-4807-ad12-398b65834399". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.481702 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-config-data" (OuterVolumeSpecName: "config-data") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.499920 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-server-conf" (OuterVolumeSpecName: "server-conf") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.536964 4890 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") on node \"crc\" " Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.537006 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjcsf\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-kube-api-access-tjcsf\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.537021 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.537033 4890 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e526a79-1777-471e-8030-64b43ff58732-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.537047 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.537058 4890 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.537070 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc 
kubenswrapper[4890]: I0121 16:58:28.537082 4890 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.537096 4890 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e526a79-1777-471e-8030-64b43ff58732-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.537108 4890 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e526a79-1777-471e-8030-64b43ff58732-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.542815 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0e526a79-1777-471e-8030-64b43ff58732" (UID: "0e526a79-1777-471e-8030-64b43ff58732"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.555753 4890 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.555915 4890 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f02d4df8-b7fd-4807-ad12-398b65834399" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399") on node "crc" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.585656 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.593845 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.616886 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:58:29 crc kubenswrapper[4890]: E0121 16:58:28.617202 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" containerName="setup-container" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.617220 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" containerName="setup-container" Jan 21 16:58:29 crc kubenswrapper[4890]: E0121 16:58:28.617234 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e526a79-1777-471e-8030-64b43ff58732" containerName="setup-container" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.617242 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e526a79-1777-471e-8030-64b43ff58732" containerName="setup-container" Jan 21 16:58:29 crc kubenswrapper[4890]: E0121 16:58:28.617262 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e526a79-1777-471e-8030-64b43ff58732" containerName="rabbitmq" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.617270 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e526a79-1777-471e-8030-64b43ff58732" containerName="rabbitmq" Jan 21 16:58:29 crc kubenswrapper[4890]: E0121 16:58:28.617282 4890 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" containerName="dnsmasq-dns" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.617289 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" containerName="dnsmasq-dns" Jan 21 16:58:29 crc kubenswrapper[4890]: E0121 16:58:28.617309 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" containerName="init" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.617316 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" containerName="init" Jan 21 16:58:29 crc kubenswrapper[4890]: E0121 16:58:28.617327 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" containerName="rabbitmq" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.617334 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" containerName="rabbitmq" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.617576 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" containerName="rabbitmq" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.617599 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e526a79-1777-471e-8030-64b43ff58732" containerName="rabbitmq" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.617609 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" containerName="dnsmasq-dns" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.618433 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.624509 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.624569 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.624574 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.624655 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.624672 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7t5z4" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.624655 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.624833 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638505 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6xc\" (UniqueName: \"kubernetes.io/projected/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-kube-api-access-4s6xc\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638530 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " 
pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638553 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638573 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638624 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638643 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638664 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " 
pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638680 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638718 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638745 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638766 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638813 4890 reconciler_common.go:293] "Volume detached for volume \"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638827 4890 reconciler_common.go:293] "Volume 
detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e526a79-1777-471e-8030-64b43ff58732-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.638872 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743039 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743088 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743115 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743140 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743204 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743239 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743261 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743293 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s6xc\" (UniqueName: \"kubernetes.io/projected/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-kube-api-access-4s6xc\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743312 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743338 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743376 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.743927 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.744093 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.744854 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.747268 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.753223 4890 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.753268 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/20c8a0b071852f10b7877f1e3c3ad1fae29df389ec481cfb14b2c234343df770/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.753467 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.760486 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.776023 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.777817 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.781228 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.808847 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s6xc\" (UniqueName: \"kubernetes.io/projected/bb70f116-e2f7-4501-87fa-d519a1d6d3f9-kube-api-access-4s6xc\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.880775 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3025d9c-38ce-4d86-82da-d4825fa933e0\") pod \"rabbitmq-server-0\" (UID: \"bb70f116-e2f7-4501-87fa-d519a1d6d3f9\") " pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:28.941765 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.275435 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.275609 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e526a79-1777-471e-8030-64b43ff58732","Type":"ContainerDied","Data":"62df3dba43185ac32a6dae81552a39f53e7ce30fec7e9f887f532fceb3e63bb4"} Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.275914 4890 scope.go:117] "RemoveContainer" containerID="97a2f480dc5915dc92e9bf23357a91a1f538ea2ad5b13415ab05dc7d14cdb299" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.283715 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qklmj" event={"ID":"b2d2dc5d-6583-425a-8a8e-068ea7a715b2","Type":"ContainerStarted","Data":"dfe8fd49f86dcd75ec2597094ad6081d19a95a93225b7b7787cba3a6fa4faf61"} Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.293696 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dt69t" event={"ID":"72d7c371-48e7-4378-8e77-ca630199e3fb","Type":"ContainerStarted","Data":"23deb30d36b455cca7d0dd63ffb0ee9327f8fe01224bb45f64f8324c8ec08abe"} Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.297267 4890 scope.go:117] "RemoveContainer" containerID="a61ac33736cbb8daf1a087d6d95c631a776cf18d160f3bb0f3c794e6eca90c07" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.305653 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qklmj" podStartSLOduration=2.4861601970000002 podStartE2EDuration="5.305632776s" podCreationTimestamp="2026-01-21 16:58:24 +0000 UTC" firstStartedPulling="2026-01-21 16:58:26.210031995 +0000 UTC m=+5188.571474404" lastFinishedPulling="2026-01-21 16:58:29.029504574 +0000 UTC m=+5191.390946983" observedRunningTime="2026-01-21 16:58:29.302776816 +0000 UTC m=+5191.664219255" watchObservedRunningTime="2026-01-21 16:58:29.305632776 +0000 UTC 
m=+5191.667075195" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.339942 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.353779 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.361618 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dt69t" podStartSLOduration=2.555593576 podStartE2EDuration="5.361589439s" podCreationTimestamp="2026-01-21 16:58:24 +0000 UTC" firstStartedPulling="2026-01-21 16:58:26.207711019 +0000 UTC m=+5188.569153428" lastFinishedPulling="2026-01-21 16:58:29.013706882 +0000 UTC m=+5191.375149291" observedRunningTime="2026-01-21 16:58:29.345208783 +0000 UTC m=+5191.706651192" watchObservedRunningTime="2026-01-21 16:58:29.361589439 +0000 UTC m=+5191.723031858" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.370018 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.374308 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.376026 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.376262 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wdcz7" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.376437 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.376555 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.376743 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.376909 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.379511 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.383181 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560120 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560166 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/53f07946-ba37-4267-af0a-6071177f2a6d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560189 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560208 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53f07946-ba37-4267-af0a-6071177f2a6d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560230 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560270 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53f07946-ba37-4267-af0a-6071177f2a6d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560290 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2tr9\" (UniqueName: \"kubernetes.io/projected/53f07946-ba37-4267-af0a-6071177f2a6d-kube-api-access-n2tr9\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560346 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53f07946-ba37-4267-af0a-6071177f2a6d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560386 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560432 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.560455 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53f07946-ba37-4267-af0a-6071177f2a6d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.582227 4890 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.661665 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53f07946-ba37-4267-af0a-6071177f2a6d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.662082 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2tr9\" (UniqueName: \"kubernetes.io/projected/53f07946-ba37-4267-af0a-6071177f2a6d-kube-api-access-n2tr9\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.662156 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53f07946-ba37-4267-af0a-6071177f2a6d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.662180 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.662209 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.662239 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53f07946-ba37-4267-af0a-6071177f2a6d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.662271 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.662296 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53f07946-ba37-4267-af0a-6071177f2a6d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.662319 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.663455 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53f07946-ba37-4267-af0a-6071177f2a6d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.662340 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53f07946-ba37-4267-af0a-6071177f2a6d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.663552 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.667166 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.667205 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f034c466dd311609fabd9f20fa7bde3f7358956056613486c49868bffd168659/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.667684 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.668068 4890 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53f07946-ba37-4267-af0a-6071177f2a6d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.668325 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.668822 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.669264 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53f07946-ba37-4267-af0a-6071177f2a6d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.669337 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53f07946-ba37-4267-af0a-6071177f2a6d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.669340 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53f07946-ba37-4267-af0a-6071177f2a6d-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.670556 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53f07946-ba37-4267-af0a-6071177f2a6d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.681298 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2tr9\" (UniqueName: \"kubernetes.io/projected/53f07946-ba37-4267-af0a-6071177f2a6d-kube-api-access-n2tr9\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.702180 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f02d4df8-b7fd-4807-ad12-398b65834399\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f02d4df8-b7fd-4807-ad12-398b65834399\") pod \"rabbitmq-cell1-server-0\" (UID: \"53f07946-ba37-4267-af0a-6071177f2a6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.923382 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e526a79-1777-471e-8030-64b43ff58732" path="/var/lib/kubelet/pods/0e526a79-1777-471e-8030-64b43ff58732/volumes" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.924134 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d57a8f7-2e2b-41ff-8274-2daf71db0e8f" path="/var/lib/kubelet/pods/6d57a8f7-2e2b-41ff-8274-2daf71db0e8f/volumes" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.925112 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2" 
path="/var/lib/kubelet/pods/ace1a931-b27b-4f47-b5b7-f5bbb86d8eb2/volumes" Jan 21 16:58:29 crc kubenswrapper[4890]: I0121 16:58:29.996985 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:58:30 crc kubenswrapper[4890]: I0121 16:58:30.302925 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb70f116-e2f7-4501-87fa-d519a1d6d3f9","Type":"ContainerStarted","Data":"3e2c97c2749ee5a28c9d2387bbcb110ffd45c9981197c995140533f913877ca8"} Jan 21 16:58:30 crc kubenswrapper[4890]: I0121 16:58:30.499464 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:58:31 crc kubenswrapper[4890]: I0121 16:58:31.314103 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb70f116-e2f7-4501-87fa-d519a1d6d3f9","Type":"ContainerStarted","Data":"c476672f0a790cd558520b3128e401ad34c00d82c01c5bea3e4fac0538a94a22"} Jan 21 16:58:31 crc kubenswrapper[4890]: I0121 16:58:31.316094 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"53f07946-ba37-4267-af0a-6071177f2a6d","Type":"ContainerStarted","Data":"ae4e4f5212825f453d2d6619795493f32655ab57de7b789c13094d79175330e2"} Jan 21 16:58:32 crc kubenswrapper[4890]: I0121 16:58:32.324600 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"53f07946-ba37-4267-af0a-6071177f2a6d","Type":"ContainerStarted","Data":"62c4ce7ea39217c95b4d49ea5bb1f10cabab4329ddeb1d6b983c6c6e4ce84461"} Jan 21 16:58:35 crc kubenswrapper[4890]: I0121 16:58:35.040660 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dt69t" Jan 21 16:58:35 crc kubenswrapper[4890]: I0121 16:58:35.041079 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-dt69t" Jan 21 16:58:35 crc kubenswrapper[4890]: I0121 16:58:35.090223 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dt69t" Jan 21 16:58:35 crc kubenswrapper[4890]: I0121 16:58:35.240443 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qklmj" Jan 21 16:58:35 crc kubenswrapper[4890]: I0121 16:58:35.240710 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qklmj" Jan 21 16:58:35 crc kubenswrapper[4890]: I0121 16:58:35.278877 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qklmj" Jan 21 16:58:35 crc kubenswrapper[4890]: I0121 16:58:35.389471 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qklmj" Jan 21 16:58:35 crc kubenswrapper[4890]: I0121 16:58:35.398843 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dt69t" Jan 21 16:58:37 crc kubenswrapper[4890]: I0121 16:58:37.126657 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qklmj"] Jan 21 16:58:37 crc kubenswrapper[4890]: I0121 16:58:37.723527 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dt69t"] Jan 21 16:58:37 crc kubenswrapper[4890]: I0121 16:58:37.723773 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dt69t" podUID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerName="registry-server" containerID="cri-o://23deb30d36b455cca7d0dd63ffb0ee9327f8fe01224bb45f64f8324c8ec08abe" gracePeriod=2 Jan 21 16:58:38 crc kubenswrapper[4890]: I0121 16:58:38.369574 4890 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qklmj" podUID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerName="registry-server" containerID="cri-o://dfe8fd49f86dcd75ec2597094ad6081d19a95a93225b7b7787cba3a6fa4faf61" gracePeriod=2 Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.385050 4890 generic.go:334] "Generic (PLEG): container finished" podID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerID="23deb30d36b455cca7d0dd63ffb0ee9327f8fe01224bb45f64f8324c8ec08abe" exitCode=0 Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.385784 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dt69t" event={"ID":"72d7c371-48e7-4378-8e77-ca630199e3fb","Type":"ContainerDied","Data":"23deb30d36b455cca7d0dd63ffb0ee9327f8fe01224bb45f64f8324c8ec08abe"} Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.388002 4890 generic.go:334] "Generic (PLEG): container finished" podID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerID="dfe8fd49f86dcd75ec2597094ad6081d19a95a93225b7b7787cba3a6fa4faf61" exitCode=0 Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.388037 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qklmj" event={"ID":"b2d2dc5d-6583-425a-8a8e-068ea7a715b2","Type":"ContainerDied","Data":"dfe8fd49f86dcd75ec2597094ad6081d19a95a93225b7b7787cba3a6fa4faf61"} Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.627035 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qklmj" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.738919 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-catalog-content\") pod \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.738995 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w76bn\" (UniqueName: \"kubernetes.io/projected/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-kube-api-access-w76bn\") pod \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.739030 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-utilities\") pod \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\" (UID: \"b2d2dc5d-6583-425a-8a8e-068ea7a715b2\") " Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.740085 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-utilities" (OuterVolumeSpecName: "utilities") pod "b2d2dc5d-6583-425a-8a8e-068ea7a715b2" (UID: "b2d2dc5d-6583-425a-8a8e-068ea7a715b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.745962 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-kube-api-access-w76bn" (OuterVolumeSpecName: "kube-api-access-w76bn") pod "b2d2dc5d-6583-425a-8a8e-068ea7a715b2" (UID: "b2d2dc5d-6583-425a-8a8e-068ea7a715b2"). InnerVolumeSpecName "kube-api-access-w76bn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.841054 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w76bn\" (UniqueName: \"kubernetes.io/projected/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-kube-api-access-w76bn\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.841102 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.855714 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dt69t" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.870451 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2d2dc5d-6583-425a-8a8e-068ea7a715b2" (UID: "b2d2dc5d-6583-425a-8a8e-068ea7a715b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.942032 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmwt2\" (UniqueName: \"kubernetes.io/projected/72d7c371-48e7-4378-8e77-ca630199e3fb-kube-api-access-lmwt2\") pod \"72d7c371-48e7-4378-8e77-ca630199e3fb\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.942158 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-utilities\") pod \"72d7c371-48e7-4378-8e77-ca630199e3fb\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.942232 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-catalog-content\") pod \"72d7c371-48e7-4378-8e77-ca630199e3fb\" (UID: \"72d7c371-48e7-4378-8e77-ca630199e3fb\") " Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.942512 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d2dc5d-6583-425a-8a8e-068ea7a715b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.943013 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-utilities" (OuterVolumeSpecName: "utilities") pod "72d7c371-48e7-4378-8e77-ca630199e3fb" (UID: "72d7c371-48e7-4378-8e77-ca630199e3fb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.945851 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d7c371-48e7-4378-8e77-ca630199e3fb-kube-api-access-lmwt2" (OuterVolumeSpecName: "kube-api-access-lmwt2") pod "72d7c371-48e7-4378-8e77-ca630199e3fb" (UID: "72d7c371-48e7-4378-8e77-ca630199e3fb"). InnerVolumeSpecName "kube-api-access-lmwt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:58:40 crc kubenswrapper[4890]: I0121 16:58:40.963801 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72d7c371-48e7-4378-8e77-ca630199e3fb" (UID: "72d7c371-48e7-4378-8e77-ca630199e3fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.044173 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.044327 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmwt2\" (UniqueName: \"kubernetes.io/projected/72d7c371-48e7-4378-8e77-ca630199e3fb-kube-api-access-lmwt2\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.044370 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d7c371-48e7-4378-8e77-ca630199e3fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.396160 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dt69t" 
event={"ID":"72d7c371-48e7-4378-8e77-ca630199e3fb","Type":"ContainerDied","Data":"1e2df865a4024904fc9a24abe673895e35828ac98c1fc05529ccac5094d5bfdd"} Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.396185 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dt69t" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.396219 4890 scope.go:117] "RemoveContainer" containerID="23deb30d36b455cca7d0dd63ffb0ee9327f8fe01224bb45f64f8324c8ec08abe" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.399529 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qklmj" event={"ID":"b2d2dc5d-6583-425a-8a8e-068ea7a715b2","Type":"ContainerDied","Data":"48d21873ca40d06746954518a49af9c7213dedb5d488c3694c00661701b4c966"} Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.399597 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qklmj" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.416988 4890 scope.go:117] "RemoveContainer" containerID="fbda557974367e6a765a256c185b2982a623f01f86fabb0956ece7bc1b5692e7" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.444946 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dt69t"] Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.454207 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dt69t"] Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.456214 4890 scope.go:117] "RemoveContainer" containerID="4dfb584954008f887aa6e76e724a39d8e832779b9890e52312e95bba3df1acf4" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.462610 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qklmj"] Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.468686 4890 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qklmj"] Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.473087 4890 scope.go:117] "RemoveContainer" containerID="dfe8fd49f86dcd75ec2597094ad6081d19a95a93225b7b7787cba3a6fa4faf61" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.487437 4890 scope.go:117] "RemoveContainer" containerID="4f4c3596372f59097945b59efef5f8aa502d20aa739de724724c80e93cf600bb" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.506535 4890 scope.go:117] "RemoveContainer" containerID="ca28e35411d8d5873c72c6fcda3980fbf1ea0cb8c34cc2701227e1eb62f7b4b5" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.923473 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d7c371-48e7-4378-8e77-ca630199e3fb" path="/var/lib/kubelet/pods/72d7c371-48e7-4378-8e77-ca630199e3fb/volumes" Jan 21 16:58:41 crc kubenswrapper[4890]: I0121 16:58:41.924261 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" path="/var/lib/kubelet/pods/b2d2dc5d-6583-425a-8a8e-068ea7a715b2/volumes" Jan 21 16:58:48 crc kubenswrapper[4890]: I0121 16:58:48.763142 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:58:48 crc kubenswrapper[4890]: I0121 16:58:48.763709 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:03 crc kubenswrapper[4890]: I0121 16:59:03.561759 4890 generic.go:334] "Generic (PLEG): container finished" 
podID="bb70f116-e2f7-4501-87fa-d519a1d6d3f9" containerID="c476672f0a790cd558520b3128e401ad34c00d82c01c5bea3e4fac0538a94a22" exitCode=0 Jan 21 16:59:03 crc kubenswrapper[4890]: I0121 16:59:03.561848 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb70f116-e2f7-4501-87fa-d519a1d6d3f9","Type":"ContainerDied","Data":"c476672f0a790cd558520b3128e401ad34c00d82c01c5bea3e4fac0538a94a22"} Jan 21 16:59:04 crc kubenswrapper[4890]: I0121 16:59:04.569026 4890 generic.go:334] "Generic (PLEG): container finished" podID="53f07946-ba37-4267-af0a-6071177f2a6d" containerID="62c4ce7ea39217c95b4d49ea5bb1f10cabab4329ddeb1d6b983c6c6e4ce84461" exitCode=0 Jan 21 16:59:04 crc kubenswrapper[4890]: I0121 16:59:04.569095 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"53f07946-ba37-4267-af0a-6071177f2a6d","Type":"ContainerDied","Data":"62c4ce7ea39217c95b4d49ea5bb1f10cabab4329ddeb1d6b983c6c6e4ce84461"} Jan 21 16:59:04 crc kubenswrapper[4890]: I0121 16:59:04.571445 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb70f116-e2f7-4501-87fa-d519a1d6d3f9","Type":"ContainerStarted","Data":"ff3cd8c237bbd7975255c1e75889d21f32637abed93833e76068c7fb7945b7aa"} Jan 21 16:59:04 crc kubenswrapper[4890]: I0121 16:59:04.572268 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 16:59:04 crc kubenswrapper[4890]: I0121 16:59:04.638721 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.638699732 podStartE2EDuration="36.638699732s" podCreationTimestamp="2026-01-21 16:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:59:04.629751105 +0000 UTC m=+5226.991193514" watchObservedRunningTime="2026-01-21 
16:59:04.638699732 +0000 UTC m=+5227.000142141" Jan 21 16:59:05 crc kubenswrapper[4890]: I0121 16:59:05.581024 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"53f07946-ba37-4267-af0a-6071177f2a6d","Type":"ContainerStarted","Data":"0c54b6bbd6b586be08504db12a8b516fc58f72f6eef68db1fa0c9def4781c1cc"} Jan 21 16:59:05 crc kubenswrapper[4890]: I0121 16:59:05.581842 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:59:05 crc kubenswrapper[4890]: I0121 16:59:05.612781 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.612754009 podStartE2EDuration="36.612754009s" podCreationTimestamp="2026-01-21 16:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:59:05.608208529 +0000 UTC m=+5227.969650938" watchObservedRunningTime="2026-01-21 16:59:05.612754009 +0000 UTC m=+5227.974196418" Jan 21 16:59:18 crc kubenswrapper[4890]: I0121 16:59:18.762159 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:18 crc kubenswrapper[4890]: I0121 16:59:18.763608 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:18 crc kubenswrapper[4890]: I0121 16:59:18.945568 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-server-0" Jan 21 16:59:20 crc kubenswrapper[4890]: I0121 16:59:20.001618 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.355121 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 16:59:23 crc kubenswrapper[4890]: E0121 16:59:23.355963 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerName="extract-utilities" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.355991 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerName="extract-utilities" Jan 21 16:59:23 crc kubenswrapper[4890]: E0121 16:59:23.356032 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerName="extract-content" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.356044 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerName="extract-content" Jan 21 16:59:23 crc kubenswrapper[4890]: E0121 16:59:23.356059 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerName="extract-content" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.356070 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerName="extract-content" Jan 21 16:59:23 crc kubenswrapper[4890]: E0121 16:59:23.356084 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerName="registry-server" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.356096 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerName="registry-server" Jan 21 16:59:23 crc kubenswrapper[4890]: E0121 
16:59:23.356123 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerName="registry-server" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.356178 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerName="registry-server" Jan 21 16:59:23 crc kubenswrapper[4890]: E0121 16:59:23.356203 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerName="extract-utilities" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.356214 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerName="extract-utilities" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.356526 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d2dc5d-6583-425a-8a8e-068ea7a715b2" containerName="registry-server" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.356554 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d7c371-48e7-4378-8e77-ca630199e3fb" containerName="registry-server" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.357342 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.360870 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-95wx6" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.364454 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.476618 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnntm\" (UniqueName: \"kubernetes.io/projected/576b65a3-05d5-45a8-9f64-198df4566e95-kube-api-access-fnntm\") pod \"mariadb-client\" (UID: \"576b65a3-05d5-45a8-9f64-198df4566e95\") " pod="openstack/mariadb-client" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.577915 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnntm\" (UniqueName: \"kubernetes.io/projected/576b65a3-05d5-45a8-9f64-198df4566e95-kube-api-access-fnntm\") pod \"mariadb-client\" (UID: \"576b65a3-05d5-45a8-9f64-198df4566e95\") " pod="openstack/mariadb-client" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.597974 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnntm\" (UniqueName: \"kubernetes.io/projected/576b65a3-05d5-45a8-9f64-198df4566e95-kube-api-access-fnntm\") pod \"mariadb-client\" (UID: \"576b65a3-05d5-45a8-9f64-198df4566e95\") " pod="openstack/mariadb-client" Jan 21 16:59:23 crc kubenswrapper[4890]: I0121 16:59:23.679187 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:59:24 crc kubenswrapper[4890]: I0121 16:59:24.174965 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:59:24 crc kubenswrapper[4890]: W0121 16:59:24.179549 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod576b65a3_05d5_45a8_9f64_198df4566e95.slice/crio-b091203defe22cf24e68ffd440ee9ef84cd3dfefe18e968510c0989538eed33d WatchSource:0}: Error finding container b091203defe22cf24e68ffd440ee9ef84cd3dfefe18e968510c0989538eed33d: Status 404 returned error can't find the container with id b091203defe22cf24e68ffd440ee9ef84cd3dfefe18e968510c0989538eed33d Jan 21 16:59:24 crc kubenswrapper[4890]: I0121 16:59:24.723441 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"576b65a3-05d5-45a8-9f64-198df4566e95","Type":"ContainerStarted","Data":"b091203defe22cf24e68ffd440ee9ef84cd3dfefe18e968510c0989538eed33d"} Jan 21 16:59:25 crc kubenswrapper[4890]: I0121 16:59:25.742882 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"576b65a3-05d5-45a8-9f64-198df4566e95","Type":"ContainerStarted","Data":"ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd"} Jan 21 16:59:25 crc kubenswrapper[4890]: I0121 16:59:25.762494 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.140523145 podStartE2EDuration="2.762473639s" podCreationTimestamp="2026-01-21 16:59:23 +0000 UTC" firstStartedPulling="2026-01-21 16:59:24.181769946 +0000 UTC m=+5246.543212355" lastFinishedPulling="2026-01-21 16:59:24.80372044 +0000 UTC m=+5247.165162849" observedRunningTime="2026-01-21 16:59:25.75489953 +0000 UTC m=+5248.116341939" watchObservedRunningTime="2026-01-21 16:59:25.762473639 +0000 UTC m=+5248.123916058" Jan 21 16:59:37 crc kubenswrapper[4890]: 
I0121 16:59:37.811722 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:59:37 crc kubenswrapper[4890]: I0121 16:59:37.812405 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="576b65a3-05d5-45a8-9f64-198df4566e95" containerName="mariadb-client" containerID="cri-o://ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd" gracePeriod=30 Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.262279 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.391055 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnntm\" (UniqueName: \"kubernetes.io/projected/576b65a3-05d5-45a8-9f64-198df4566e95-kube-api-access-fnntm\") pod \"576b65a3-05d5-45a8-9f64-198df4566e95\" (UID: \"576b65a3-05d5-45a8-9f64-198df4566e95\") " Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.396521 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576b65a3-05d5-45a8-9f64-198df4566e95-kube-api-access-fnntm" (OuterVolumeSpecName: "kube-api-access-fnntm") pod "576b65a3-05d5-45a8-9f64-198df4566e95" (UID: "576b65a3-05d5-45a8-9f64-198df4566e95"). InnerVolumeSpecName "kube-api-access-fnntm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.493182 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnntm\" (UniqueName: \"kubernetes.io/projected/576b65a3-05d5-45a8-9f64-198df4566e95-kube-api-access-fnntm\") on node \"crc\" DevicePath \"\"" Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.836763 4890 generic.go:334] "Generic (PLEG): container finished" podID="576b65a3-05d5-45a8-9f64-198df4566e95" containerID="ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd" exitCode=143 Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.836815 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"576b65a3-05d5-45a8-9f64-198df4566e95","Type":"ContainerDied","Data":"ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd"} Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.836833 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.836872 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"576b65a3-05d5-45a8-9f64-198df4566e95","Type":"ContainerDied","Data":"b091203defe22cf24e68ffd440ee9ef84cd3dfefe18e968510c0989538eed33d"} Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.836894 4890 scope.go:117] "RemoveContainer" containerID="ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd" Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.859534 4890 scope.go:117] "RemoveContainer" containerID="ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd" Jan 21 16:59:38 crc kubenswrapper[4890]: E0121 16:59:38.860153 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd\": container with ID starting with ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd not found: ID does not exist" containerID="ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd" Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.860200 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd"} err="failed to get container status \"ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd\": rpc error: code = NotFound desc = could not find container \"ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd\": container with ID starting with ebbd2e91d071f9783d6f31b861e0891f4a29c0b145778bcff561fe3eea757bcd not found: ID does not exist" Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.872343 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:59:38 crc kubenswrapper[4890]: I0121 16:59:38.880197 4890 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:59:39 crc kubenswrapper[4890]: I0121 16:59:39.934856 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576b65a3-05d5-45a8-9f64-198df4566e95" path="/var/lib/kubelet/pods/576b65a3-05d5-45a8-9f64-198df4566e95/volumes" Jan 21 16:59:48 crc kubenswrapper[4890]: I0121 16:59:48.762301 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:48 crc kubenswrapper[4890]: I0121 16:59:48.762872 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:48 crc kubenswrapper[4890]: I0121 16:59:48.762927 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 16:59:48 crc kubenswrapper[4890]: I0121 16:59:48.763590 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:59:48 crc kubenswrapper[4890]: I0121 16:59:48.763639 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" 
containerName="machine-config-daemon" containerID="cri-o://5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" gracePeriod=600 Jan 21 16:59:48 crc kubenswrapper[4890]: E0121 16:59:48.893089 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:59:48 crc kubenswrapper[4890]: I0121 16:59:48.923058 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" exitCode=0 Jan 21 16:59:48 crc kubenswrapper[4890]: I0121 16:59:48.923119 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443"} Jan 21 16:59:48 crc kubenswrapper[4890]: I0121 16:59:48.923173 4890 scope.go:117] "RemoveContainer" containerID="142c6fbaaaf0c0988b80a5cda216027a830094babae157afa2e11ed6dc30d815" Jan 21 16:59:48 crc kubenswrapper[4890]: I0121 16:59:48.924038 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 16:59:48 crc kubenswrapper[4890]: E0121 16:59:48.924318 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 16:59:59 crc kubenswrapper[4890]: I0121 16:59:59.914051 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 16:59:59 crc kubenswrapper[4890]: E0121 16:59:59.914782 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.159792 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg"] Jan 21 17:00:00 crc kubenswrapper[4890]: E0121 17:00:00.160153 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576b65a3-05d5-45a8-9f64-198df4566e95" containerName="mariadb-client" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.160179 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="576b65a3-05d5-45a8-9f64-198df4566e95" containerName="mariadb-client" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.160385 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="576b65a3-05d5-45a8-9f64-198df4566e95" containerName="mariadb-client" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.162094 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.165132 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.165310 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.170112 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg"] Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.224048 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-secret-volume\") pod \"collect-profiles-29483580-d4hqg\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.224548 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-config-volume\") pod \"collect-profiles-29483580-d4hqg\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.224676 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czmjh\" (UniqueName: \"kubernetes.io/projected/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-kube-api-access-czmjh\") pod \"collect-profiles-29483580-d4hqg\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.326050 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czmjh\" (UniqueName: \"kubernetes.io/projected/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-kube-api-access-czmjh\") pod \"collect-profiles-29483580-d4hqg\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.326156 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-secret-volume\") pod \"collect-profiles-29483580-d4hqg\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.326188 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-config-volume\") pod \"collect-profiles-29483580-d4hqg\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.327243 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-config-volume\") pod \"collect-profiles-29483580-d4hqg\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.333192 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-secret-volume\") pod \"collect-profiles-29483580-d4hqg\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.346186 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czmjh\" (UniqueName: \"kubernetes.io/projected/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-kube-api-access-czmjh\") pod \"collect-profiles-29483580-d4hqg\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.486718 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:00 crc kubenswrapper[4890]: I0121 17:00:00.948710 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg"] Jan 21 17:00:01 crc kubenswrapper[4890]: I0121 17:00:01.012394 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" event={"ID":"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f","Type":"ContainerStarted","Data":"d262eeff06abf35466f89544239122d9592ca3b6363131b0acd756b586f8bbfc"} Jan 21 17:00:02 crc kubenswrapper[4890]: I0121 17:00:02.022993 4890 generic.go:334] "Generic (PLEG): container finished" podID="2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f" containerID="546d9d19ac74ff8739504a3d37a12f9848895c08c6b679678eaa681e16a39b48" exitCode=0 Jan 21 17:00:02 crc kubenswrapper[4890]: I0121 17:00:02.023111 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" 
event={"ID":"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f","Type":"ContainerDied","Data":"546d9d19ac74ff8739504a3d37a12f9848895c08c6b679678eaa681e16a39b48"} Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.317023 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.375455 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czmjh\" (UniqueName: \"kubernetes.io/projected/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-kube-api-access-czmjh\") pod \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.375540 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-secret-volume\") pod \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.375659 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-config-volume\") pod \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\" (UID: \"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f\") " Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.376567 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-config-volume" (OuterVolumeSpecName: "config-volume") pod "2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f" (UID: "2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.380637 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-kube-api-access-czmjh" (OuterVolumeSpecName: "kube-api-access-czmjh") pod "2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f" (UID: "2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f"). InnerVolumeSpecName "kube-api-access-czmjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.380658 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f" (UID: "2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.478015 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.478073 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czmjh\" (UniqueName: \"kubernetes.io/projected/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-kube-api-access-czmjh\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4890]: I0121 17:00:03.478090 4890 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4890]: I0121 17:00:04.038659 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" 
event={"ID":"2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f","Type":"ContainerDied","Data":"d262eeff06abf35466f89544239122d9592ca3b6363131b0acd756b586f8bbfc"} Jan 21 17:00:04 crc kubenswrapper[4890]: I0121 17:00:04.038705 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d262eeff06abf35466f89544239122d9592ca3b6363131b0acd756b586f8bbfc" Jan 21 17:00:04 crc kubenswrapper[4890]: I0121 17:00:04.038709 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-d4hqg" Jan 21 17:00:04 crc kubenswrapper[4890]: I0121 17:00:04.383965 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs"] Jan 21 17:00:04 crc kubenswrapper[4890]: I0121 17:00:04.390716 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-nkmgs"] Jan 21 17:00:05 crc kubenswrapper[4890]: I0121 17:00:05.923963 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5f18fc-a813-4c74-ade7-0d3ef6c53d87" path="/var/lib/kubelet/pods/5b5f18fc-a813-4c74-ade7-0d3ef6c53d87/volumes" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.589288 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-69n8h"] Jan 21 17:00:10 crc kubenswrapper[4890]: E0121 17:00:10.590573 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f" containerName="collect-profiles" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.590590 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f" containerName="collect-profiles" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.590761 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef80f8d-c492-4ac1-ac49-57d8d8b35d1f" containerName="collect-profiles" 
Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.592117 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.602012 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-69n8h"] Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.690899 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-catalog-content\") pod \"certified-operators-69n8h\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.690997 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrwsq\" (UniqueName: \"kubernetes.io/projected/acada5fb-d4bc-43b9-a307-942c4dab08ac-kube-api-access-qrwsq\") pod \"certified-operators-69n8h\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.691022 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-utilities\") pod \"certified-operators-69n8h\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.792233 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-catalog-content\") pod \"certified-operators-69n8h\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " 
pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.792332 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrwsq\" (UniqueName: \"kubernetes.io/projected/acada5fb-d4bc-43b9-a307-942c4dab08ac-kube-api-access-qrwsq\") pod \"certified-operators-69n8h\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.792365 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-utilities\") pod \"certified-operators-69n8h\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.793029 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-catalog-content\") pod \"certified-operators-69n8h\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.793035 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-utilities\") pod \"certified-operators-69n8h\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.819291 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrwsq\" (UniqueName: \"kubernetes.io/projected/acada5fb-d4bc-43b9-a307-942c4dab08ac-kube-api-access-qrwsq\") pod \"certified-operators-69n8h\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " 
pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:10 crc kubenswrapper[4890]: I0121 17:00:10.918994 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:11 crc kubenswrapper[4890]: I0121 17:00:11.308496 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-69n8h"] Jan 21 17:00:11 crc kubenswrapper[4890]: W0121 17:00:11.313761 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacada5fb_d4bc_43b9_a307_942c4dab08ac.slice/crio-e65c0c43689f25f4c85cd1700718a4f4afd980272d2ea5b3c8ed6020aeabaea6 WatchSource:0}: Error finding container e65c0c43689f25f4c85cd1700718a4f4afd980272d2ea5b3c8ed6020aeabaea6: Status 404 returned error can't find the container with id e65c0c43689f25f4c85cd1700718a4f4afd980272d2ea5b3c8ed6020aeabaea6 Jan 21 17:00:12 crc kubenswrapper[4890]: I0121 17:00:12.110380 4890 generic.go:334] "Generic (PLEG): container finished" podID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerID="0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497" exitCode=0 Jan 21 17:00:12 crc kubenswrapper[4890]: I0121 17:00:12.110489 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69n8h" event={"ID":"acada5fb-d4bc-43b9-a307-942c4dab08ac","Type":"ContainerDied","Data":"0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497"} Jan 21 17:00:12 crc kubenswrapper[4890]: I0121 17:00:12.110880 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69n8h" event={"ID":"acada5fb-d4bc-43b9-a307-942c4dab08ac","Type":"ContainerStarted","Data":"e65c0c43689f25f4c85cd1700718a4f4afd980272d2ea5b3c8ed6020aeabaea6"} Jan 21 17:00:12 crc kubenswrapper[4890]: I0121 17:00:12.914253 4890 scope.go:117] "RemoveContainer" 
containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:00:12 crc kubenswrapper[4890]: E0121 17:00:12.914496 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:00:14 crc kubenswrapper[4890]: I0121 17:00:14.125437 4890 generic.go:334] "Generic (PLEG): container finished" podID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerID="209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e" exitCode=0 Jan 21 17:00:14 crc kubenswrapper[4890]: I0121 17:00:14.125519 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69n8h" event={"ID":"acada5fb-d4bc-43b9-a307-942c4dab08ac","Type":"ContainerDied","Data":"209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e"} Jan 21 17:00:15 crc kubenswrapper[4890]: I0121 17:00:15.137875 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69n8h" event={"ID":"acada5fb-d4bc-43b9-a307-942c4dab08ac","Type":"ContainerStarted","Data":"6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749"} Jan 21 17:00:15 crc kubenswrapper[4890]: I0121 17:00:15.164283 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-69n8h" podStartSLOduration=2.680892123 podStartE2EDuration="5.164249856s" podCreationTimestamp="2026-01-21 17:00:10 +0000 UTC" firstStartedPulling="2026-01-21 17:00:12.113345714 +0000 UTC m=+5294.474788123" lastFinishedPulling="2026-01-21 17:00:14.596703447 +0000 UTC m=+5296.958145856" observedRunningTime="2026-01-21 17:00:15.157524608 +0000 
UTC m=+5297.518967037" watchObservedRunningTime="2026-01-21 17:00:15.164249856 +0000 UTC m=+5297.525692265" Jan 21 17:00:18 crc kubenswrapper[4890]: I0121 17:00:18.568792 4890 scope.go:117] "RemoveContainer" containerID="67d0b258eab9d1234c7dcf0002c8c6d782d50270341f6881b73a53739639c90d" Jan 21 17:00:20 crc kubenswrapper[4890]: I0121 17:00:20.920551 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:20 crc kubenswrapper[4890]: I0121 17:00:20.922451 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:20 crc kubenswrapper[4890]: I0121 17:00:20.974935 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:21 crc kubenswrapper[4890]: I0121 17:00:21.260488 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:21 crc kubenswrapper[4890]: I0121 17:00:21.320306 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-69n8h"] Jan 21 17:00:23 crc kubenswrapper[4890]: I0121 17:00:23.194060 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-69n8h" podUID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerName="registry-server" containerID="cri-o://6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749" gracePeriod=2 Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.153780 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.204947 4890 generic.go:334] "Generic (PLEG): container finished" podID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerID="6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749" exitCode=0 Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.204992 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69n8h" event={"ID":"acada5fb-d4bc-43b9-a307-942c4dab08ac","Type":"ContainerDied","Data":"6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749"} Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.205005 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69n8h" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.205017 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69n8h" event={"ID":"acada5fb-d4bc-43b9-a307-942c4dab08ac","Type":"ContainerDied","Data":"e65c0c43689f25f4c85cd1700718a4f4afd980272d2ea5b3c8ed6020aeabaea6"} Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.205032 4890 scope.go:117] "RemoveContainer" containerID="6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.231296 4890 scope.go:117] "RemoveContainer" containerID="209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.249424 4890 scope.go:117] "RemoveContainer" containerID="0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.284651 4890 scope.go:117] "RemoveContainer" containerID="6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749" Jan 21 17:00:24 crc kubenswrapper[4890]: E0121 17:00:24.285206 4890 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749\": container with ID starting with 6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749 not found: ID does not exist" containerID="6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.285269 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749"} err="failed to get container status \"6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749\": rpc error: code = NotFound desc = could not find container \"6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749\": container with ID starting with 6b51384771f48caa28cb5018bb4699b0ca91c424409bdc6eaf97f883913b1749 not found: ID does not exist" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.285308 4890 scope.go:117] "RemoveContainer" containerID="209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e" Jan 21 17:00:24 crc kubenswrapper[4890]: E0121 17:00:24.285822 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e\": container with ID starting with 209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e not found: ID does not exist" containerID="209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.285884 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e"} err="failed to get container status \"209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e\": rpc error: code = NotFound desc = could not find container 
\"209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e\": container with ID starting with 209d2b900a240705685a69741b8f09e875907facd97d05556b710e28a93eae8e not found: ID does not exist" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.285925 4890 scope.go:117] "RemoveContainer" containerID="0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497" Jan 21 17:00:24 crc kubenswrapper[4890]: E0121 17:00:24.286426 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497\": container with ID starting with 0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497 not found: ID does not exist" containerID="0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.286565 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497"} err="failed to get container status \"0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497\": rpc error: code = NotFound desc = could not find container \"0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497\": container with ID starting with 0b2a697882e44fb56378088f9520f9fe12ab51af698918f9c79b5451deb55497 not found: ID does not exist" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.311329 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-utilities\") pod \"acada5fb-d4bc-43b9-a307-942c4dab08ac\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.311427 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrwsq\" (UniqueName: 
\"kubernetes.io/projected/acada5fb-d4bc-43b9-a307-942c4dab08ac-kube-api-access-qrwsq\") pod \"acada5fb-d4bc-43b9-a307-942c4dab08ac\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.311519 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-catalog-content\") pod \"acada5fb-d4bc-43b9-a307-942c4dab08ac\" (UID: \"acada5fb-d4bc-43b9-a307-942c4dab08ac\") " Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.312747 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-utilities" (OuterVolumeSpecName: "utilities") pod "acada5fb-d4bc-43b9-a307-942c4dab08ac" (UID: "acada5fb-d4bc-43b9-a307-942c4dab08ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.318929 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acada5fb-d4bc-43b9-a307-942c4dab08ac-kube-api-access-qrwsq" (OuterVolumeSpecName: "kube-api-access-qrwsq") pod "acada5fb-d4bc-43b9-a307-942c4dab08ac" (UID: "acada5fb-d4bc-43b9-a307-942c4dab08ac"). InnerVolumeSpecName "kube-api-access-qrwsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.413841 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.413892 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrwsq\" (UniqueName: \"kubernetes.io/projected/acada5fb-d4bc-43b9-a307-942c4dab08ac-kube-api-access-qrwsq\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.543962 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acada5fb-d4bc-43b9-a307-942c4dab08ac" (UID: "acada5fb-d4bc-43b9-a307-942c4dab08ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.616595 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acada5fb-d4bc-43b9-a307-942c4dab08ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.837930 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-69n8h"] Jan 21 17:00:24 crc kubenswrapper[4890]: I0121 17:00:24.848168 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-69n8h"] Jan 21 17:00:25 crc kubenswrapper[4890]: I0121 17:00:25.922372 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acada5fb-d4bc-43b9-a307-942c4dab08ac" path="/var/lib/kubelet/pods/acada5fb-d4bc-43b9-a307-942c4dab08ac/volumes" Jan 21 17:00:27 crc kubenswrapper[4890]: I0121 17:00:27.918432 4890 scope.go:117] 
"RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:00:27 crc kubenswrapper[4890]: E0121 17:00:27.918804 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:00:42 crc kubenswrapper[4890]: I0121 17:00:42.914307 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:00:42 crc kubenswrapper[4890]: E0121 17:00:42.915251 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:00:54 crc kubenswrapper[4890]: I0121 17:00:54.914214 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:00:54 crc kubenswrapper[4890]: E0121 17:00:54.915130 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:01:07 crc kubenswrapper[4890]: I0121 17:01:07.917560 
4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:01:07 crc kubenswrapper[4890]: E0121 17:01:07.918201 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:01:18 crc kubenswrapper[4890]: I0121 17:01:18.632725 4890 scope.go:117] "RemoveContainer" containerID="db61e0bc92dae4b4bcebc38b96b466c861295bf121b78b0b95c56980ef0cf04e" Jan 21 17:01:19 crc kubenswrapper[4890]: I0121 17:01:19.914123 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:01:19 crc kubenswrapper[4890]: E0121 17:01:19.914789 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:01:34 crc kubenswrapper[4890]: I0121 17:01:34.915324 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:01:34 crc kubenswrapper[4890]: E0121 17:01:34.916436 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:01:46 crc kubenswrapper[4890]: I0121 17:01:46.914394 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:01:46 crc kubenswrapper[4890]: E0121 17:01:46.915102 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:02:00 crc kubenswrapper[4890]: I0121 17:02:00.913986 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:02:00 crc kubenswrapper[4890]: E0121 17:02:00.914788 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:02:14 crc kubenswrapper[4890]: I0121 17:02:14.913583 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:02:14 crc kubenswrapper[4890]: E0121 17:02:14.914335 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:02:26 crc kubenswrapper[4890]: I0121 17:02:26.914573 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:02:26 crc kubenswrapper[4890]: E0121 17:02:26.915277 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.527593 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bnqhg"] Jan 21 17:02:39 crc kubenswrapper[4890]: E0121 17:02:39.528537 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerName="extract-content" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.528555 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerName="extract-content" Jan 21 17:02:39 crc kubenswrapper[4890]: E0121 17:02:39.528578 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerName="registry-server" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.528586 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerName="registry-server" Jan 21 17:02:39 crc kubenswrapper[4890]: E0121 17:02:39.528603 4890 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerName="extract-utilities" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.528612 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerName="extract-utilities" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.528805 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="acada5fb-d4bc-43b9-a307-942c4dab08ac" containerName="registry-server" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.530099 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.538024 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnqhg"] Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.622060 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-catalog-content\") pod \"community-operators-bnqhg\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.622108 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55g55\" (UniqueName: \"kubernetes.io/projected/42e308cf-6da3-4d37-b9d8-97a83c5c5071-kube-api-access-55g55\") pod \"community-operators-bnqhg\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.622155 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-utilities\") pod 
\"community-operators-bnqhg\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.723693 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-catalog-content\") pod \"community-operators-bnqhg\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.723763 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55g55\" (UniqueName: \"kubernetes.io/projected/42e308cf-6da3-4d37-b9d8-97a83c5c5071-kube-api-access-55g55\") pod \"community-operators-bnqhg\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.723834 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-utilities\") pod \"community-operators-bnqhg\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.724383 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-catalog-content\") pod \"community-operators-bnqhg\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.724461 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-utilities\") pod \"community-operators-bnqhg\" (UID: 
\"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.761512 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55g55\" (UniqueName: \"kubernetes.io/projected/42e308cf-6da3-4d37-b9d8-97a83c5c5071-kube-api-access-55g55\") pod \"community-operators-bnqhg\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:39 crc kubenswrapper[4890]: I0121 17:02:39.849214 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:40 crc kubenswrapper[4890]: I0121 17:02:40.351538 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnqhg"] Jan 21 17:02:40 crc kubenswrapper[4890]: I0121 17:02:40.914027 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:02:40 crc kubenswrapper[4890]: E0121 17:02:40.914408 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:02:41 crc kubenswrapper[4890]: I0121 17:02:41.187236 4890 generic.go:334] "Generic (PLEG): container finished" podID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerID="5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402" exitCode=0 Jan 21 17:02:41 crc kubenswrapper[4890]: I0121 17:02:41.187281 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqhg" 
event={"ID":"42e308cf-6da3-4d37-b9d8-97a83c5c5071","Type":"ContainerDied","Data":"5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402"} Jan 21 17:02:41 crc kubenswrapper[4890]: I0121 17:02:41.187306 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqhg" event={"ID":"42e308cf-6da3-4d37-b9d8-97a83c5c5071","Type":"ContainerStarted","Data":"51d48dbc351f6811f5d77bc0f2adf57e69c6aa5c65821bdf9cf55996c33bac09"} Jan 21 17:02:42 crc kubenswrapper[4890]: I0121 17:02:42.195664 4890 generic.go:334] "Generic (PLEG): container finished" podID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerID="b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8" exitCode=0 Jan 21 17:02:42 crc kubenswrapper[4890]: I0121 17:02:42.195732 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqhg" event={"ID":"42e308cf-6da3-4d37-b9d8-97a83c5c5071","Type":"ContainerDied","Data":"b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8"} Jan 21 17:02:43 crc kubenswrapper[4890]: I0121 17:02:43.205365 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqhg" event={"ID":"42e308cf-6da3-4d37-b9d8-97a83c5c5071","Type":"ContainerStarted","Data":"09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf"} Jan 21 17:02:43 crc kubenswrapper[4890]: I0121 17:02:43.222439 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bnqhg" podStartSLOduration=2.824348937 podStartE2EDuration="4.222423193s" podCreationTimestamp="2026-01-21 17:02:39 +0000 UTC" firstStartedPulling="2026-01-21 17:02:41.188704343 +0000 UTC m=+5443.550146752" lastFinishedPulling="2026-01-21 17:02:42.586778599 +0000 UTC m=+5444.948221008" observedRunningTime="2026-01-21 17:02:43.221624543 +0000 UTC m=+5445.583066952" watchObservedRunningTime="2026-01-21 17:02:43.222423193 +0000 UTC 
m=+5445.583865602" Jan 21 17:02:49 crc kubenswrapper[4890]: I0121 17:02:49.849798 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:49 crc kubenswrapper[4890]: I0121 17:02:49.850454 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:49 crc kubenswrapper[4890]: I0121 17:02:49.904152 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:50 crc kubenswrapper[4890]: I0121 17:02:50.322815 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:50 crc kubenswrapper[4890]: I0121 17:02:50.364941 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnqhg"] Jan 21 17:02:52 crc kubenswrapper[4890]: I0121 17:02:52.296293 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bnqhg" podUID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerName="registry-server" containerID="cri-o://09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf" gracePeriod=2 Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.171305 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.252920 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-utilities\") pod \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.252999 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55g55\" (UniqueName: \"kubernetes.io/projected/42e308cf-6da3-4d37-b9d8-97a83c5c5071-kube-api-access-55g55\") pod \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.253100 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-catalog-content\") pod \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\" (UID: \"42e308cf-6da3-4d37-b9d8-97a83c5c5071\") " Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.253965 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-utilities" (OuterVolumeSpecName: "utilities") pod "42e308cf-6da3-4d37-b9d8-97a83c5c5071" (UID: "42e308cf-6da3-4d37-b9d8-97a83c5c5071"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.259462 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e308cf-6da3-4d37-b9d8-97a83c5c5071-kube-api-access-55g55" (OuterVolumeSpecName: "kube-api-access-55g55") pod "42e308cf-6da3-4d37-b9d8-97a83c5c5071" (UID: "42e308cf-6da3-4d37-b9d8-97a83c5c5071"). InnerVolumeSpecName "kube-api-access-55g55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.306116 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42e308cf-6da3-4d37-b9d8-97a83c5c5071" (UID: "42e308cf-6da3-4d37-b9d8-97a83c5c5071"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.311962 4890 generic.go:334] "Generic (PLEG): container finished" podID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerID="09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf" exitCode=0 Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.312031 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqhg" event={"ID":"42e308cf-6da3-4d37-b9d8-97a83c5c5071","Type":"ContainerDied","Data":"09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf"} Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.312093 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqhg" event={"ID":"42e308cf-6da3-4d37-b9d8-97a83c5c5071","Type":"ContainerDied","Data":"51d48dbc351f6811f5d77bc0f2adf57e69c6aa5c65821bdf9cf55996c33bac09"} Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.312120 4890 scope.go:117] "RemoveContainer" containerID="09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.312127 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnqhg" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.342930 4890 scope.go:117] "RemoveContainer" containerID="b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.352031 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnqhg"] Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.355562 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.355598 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e308cf-6da3-4d37-b9d8-97a83c5c5071-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.355613 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55g55\" (UniqueName: \"kubernetes.io/projected/42e308cf-6da3-4d37-b9d8-97a83c5c5071-kube-api-access-55g55\") on node \"crc\" DevicePath \"\"" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.360324 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bnqhg"] Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.361247 4890 scope.go:117] "RemoveContainer" containerID="5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.391694 4890 scope.go:117] "RemoveContainer" containerID="09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf" Jan 21 17:02:53 crc kubenswrapper[4890]: E0121 17:02:53.392103 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf\": container with ID starting with 09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf not found: ID does not exist" containerID="09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.392134 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf"} err="failed to get container status \"09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf\": rpc error: code = NotFound desc = could not find container \"09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf\": container with ID starting with 09ce9d7fb345155412c4d7061816de0acb036955280ff4ff8bd57189e99831cf not found: ID does not exist" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.392155 4890 scope.go:117] "RemoveContainer" containerID="b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8" Jan 21 17:02:53 crc kubenswrapper[4890]: E0121 17:02:53.392421 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8\": container with ID starting with b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8 not found: ID does not exist" containerID="b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.392472 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8"} err="failed to get container status \"b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8\": rpc error: code = NotFound desc = could not find container \"b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8\": container with ID 
starting with b591f611455e4fea20d3f9c6fe65b25c1653c27e44a93875451aa4fdd35afbb8 not found: ID does not exist" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.392499 4890 scope.go:117] "RemoveContainer" containerID="5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402" Jan 21 17:02:53 crc kubenswrapper[4890]: E0121 17:02:53.393257 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402\": container with ID starting with 5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402 not found: ID does not exist" containerID="5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.393281 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402"} err="failed to get container status \"5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402\": rpc error: code = NotFound desc = could not find container \"5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402\": container with ID starting with 5ec362ea15dd12a61a25c853321663db955f1d3a6112ee0e357790ad84a3f402 not found: ID does not exist" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.914214 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:02:53 crc kubenswrapper[4890]: E0121 17:02:53.914477 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" 
podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:02:53 crc kubenswrapper[4890]: I0121 17:02:53.927084 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" path="/var/lib/kubelet/pods/42e308cf-6da3-4d37-b9d8-97a83c5c5071/volumes" Jan 21 17:03:07 crc kubenswrapper[4890]: I0121 17:03:07.917633 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:03:07 crc kubenswrapper[4890]: E0121 17:03:07.918220 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.705218 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 17:03:17 crc kubenswrapper[4890]: E0121 17:03:17.706061 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerName="extract-content" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.706074 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerName="extract-content" Jan 21 17:03:17 crc kubenswrapper[4890]: E0121 17:03:17.706097 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerName="extract-utilities" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.706107 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerName="extract-utilities" Jan 21 17:03:17 crc kubenswrapper[4890]: E0121 17:03:17.706129 4890 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerName="registry-server" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.706136 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerName="registry-server" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.706331 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e308cf-6da3-4d37-b9d8-97a83c5c5071" containerName="registry-server" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.706841 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.709199 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-95wx6" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.716904 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.827118 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-579006aa-fe38-412e-ba70-71da097070eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-579006aa-fe38-412e-ba70-71da097070eb\") pod \"mariadb-copy-data\" (UID: \"58287e81-1deb-4dbc-a395-087a84f0830b\") " pod="openstack/mariadb-copy-data" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.827180 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nsmt\" (UniqueName: \"kubernetes.io/projected/58287e81-1deb-4dbc-a395-087a84f0830b-kube-api-access-2nsmt\") pod \"mariadb-copy-data\" (UID: \"58287e81-1deb-4dbc-a395-087a84f0830b\") " pod="openstack/mariadb-copy-data" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.928073 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-579006aa-fe38-412e-ba70-71da097070eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-579006aa-fe38-412e-ba70-71da097070eb\") pod \"mariadb-copy-data\" (UID: \"58287e81-1deb-4dbc-a395-087a84f0830b\") " pod="openstack/mariadb-copy-data" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.928125 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nsmt\" (UniqueName: \"kubernetes.io/projected/58287e81-1deb-4dbc-a395-087a84f0830b-kube-api-access-2nsmt\") pod \"mariadb-copy-data\" (UID: \"58287e81-1deb-4dbc-a395-087a84f0830b\") " pod="openstack/mariadb-copy-data" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.930928 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.930967 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-579006aa-fe38-412e-ba70-71da097070eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-579006aa-fe38-412e-ba70-71da097070eb\") pod \"mariadb-copy-data\" (UID: \"58287e81-1deb-4dbc-a395-087a84f0830b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/50526fd9ec597762c1dbf347b961f5d8aac0dfedc587eaab3f9f5cf095bf6ba2/globalmount\"" pod="openstack/mariadb-copy-data" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.953531 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nsmt\" (UniqueName: \"kubernetes.io/projected/58287e81-1deb-4dbc-a395-087a84f0830b-kube-api-access-2nsmt\") pod \"mariadb-copy-data\" (UID: \"58287e81-1deb-4dbc-a395-087a84f0830b\") " pod="openstack/mariadb-copy-data" Jan 21 17:03:17 crc kubenswrapper[4890]: I0121 17:03:17.959606 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-579006aa-fe38-412e-ba70-71da097070eb\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-579006aa-fe38-412e-ba70-71da097070eb\") pod \"mariadb-copy-data\" (UID: \"58287e81-1deb-4dbc-a395-087a84f0830b\") " pod="openstack/mariadb-copy-data" Jan 21 17:03:18 crc kubenswrapper[4890]: I0121 17:03:18.029468 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 21 17:03:18 crc kubenswrapper[4890]: I0121 17:03:18.500403 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 17:03:18 crc kubenswrapper[4890]: I0121 17:03:18.520866 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"58287e81-1deb-4dbc-a395-087a84f0830b","Type":"ContainerStarted","Data":"3c2574e010034ff3e3ac5e7a8ccb6745dbcdb6673b73ce4ad517e413191a2f61"} Jan 21 17:03:19 crc kubenswrapper[4890]: I0121 17:03:19.530047 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"58287e81-1deb-4dbc-a395-087a84f0830b","Type":"ContainerStarted","Data":"fa7eb0c75e318b772c5410fd00f1ce2cbe934e7a2b2076c1ade413e30a24ebcf"} Jan 21 17:03:19 crc kubenswrapper[4890]: I0121 17:03:19.543255 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.543235508 podStartE2EDuration="3.543235508s" podCreationTimestamp="2026-01-21 17:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:03:19.542794767 +0000 UTC m=+5481.904237176" watchObservedRunningTime="2026-01-21 17:03:19.543235508 +0000 UTC m=+5481.904677917" Jan 21 17:03:22 crc kubenswrapper[4890]: I0121 17:03:22.448335 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:22 crc kubenswrapper[4890]: I0121 17:03:22.450788 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 17:03:22 crc kubenswrapper[4890]: I0121 17:03:22.458055 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:22 crc kubenswrapper[4890]: I0121 17:03:22.610425 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns8nl\" (UniqueName: \"kubernetes.io/projected/d6e81c47-e3b9-4d9a-8e45-8351eb01c07a-kube-api-access-ns8nl\") pod \"mariadb-client\" (UID: \"d6e81c47-e3b9-4d9a-8e45-8351eb01c07a\") " pod="openstack/mariadb-client" Jan 21 17:03:22 crc kubenswrapper[4890]: I0121 17:03:22.711607 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns8nl\" (UniqueName: \"kubernetes.io/projected/d6e81c47-e3b9-4d9a-8e45-8351eb01c07a-kube-api-access-ns8nl\") pod \"mariadb-client\" (UID: \"d6e81c47-e3b9-4d9a-8e45-8351eb01c07a\") " pod="openstack/mariadb-client" Jan 21 17:03:22 crc kubenswrapper[4890]: I0121 17:03:22.734548 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns8nl\" (UniqueName: \"kubernetes.io/projected/d6e81c47-e3b9-4d9a-8e45-8351eb01c07a-kube-api-access-ns8nl\") pod \"mariadb-client\" (UID: \"d6e81c47-e3b9-4d9a-8e45-8351eb01c07a\") " pod="openstack/mariadb-client" Jan 21 17:03:22 crc kubenswrapper[4890]: I0121 17:03:22.772980 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 17:03:22 crc kubenswrapper[4890]: I0121 17:03:22.915479 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:03:22 crc kubenswrapper[4890]: E0121 17:03:22.916255 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:03:23 crc kubenswrapper[4890]: I0121 17:03:23.191895 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:23 crc kubenswrapper[4890]: W0121 17:03:23.200324 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6e81c47_e3b9_4d9a_8e45_8351eb01c07a.slice/crio-91ea12988e8d49e776248b9d0572e6ceeefd112d04bb761c4113ff5a41e02031 WatchSource:0}: Error finding container 91ea12988e8d49e776248b9d0572e6ceeefd112d04bb761c4113ff5a41e02031: Status 404 returned error can't find the container with id 91ea12988e8d49e776248b9d0572e6ceeefd112d04bb761c4113ff5a41e02031 Jan 21 17:03:23 crc kubenswrapper[4890]: I0121 17:03:23.560191 4890 generic.go:334] "Generic (PLEG): container finished" podID="d6e81c47-e3b9-4d9a-8e45-8351eb01c07a" containerID="206d1a5eca57087274f0adb0394363fb034dcbf7d265fc3e4da99fceeb89bb25" exitCode=0 Jan 21 17:03:23 crc kubenswrapper[4890]: I0121 17:03:23.560285 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d6e81c47-e3b9-4d9a-8e45-8351eb01c07a","Type":"ContainerDied","Data":"206d1a5eca57087274f0adb0394363fb034dcbf7d265fc3e4da99fceeb89bb25"} Jan 21 17:03:23 crc 
kubenswrapper[4890]: I0121 17:03:23.560547 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d6e81c47-e3b9-4d9a-8e45-8351eb01c07a","Type":"ContainerStarted","Data":"91ea12988e8d49e776248b9d0572e6ceeefd112d04bb761c4113ff5a41e02031"} Jan 21 17:03:24 crc kubenswrapper[4890]: I0121 17:03:24.859230 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 17:03:24 crc kubenswrapper[4890]: I0121 17:03:24.881575 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_d6e81c47-e3b9-4d9a-8e45-8351eb01c07a/mariadb-client/0.log" Jan 21 17:03:24 crc kubenswrapper[4890]: I0121 17:03:24.906133 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:24 crc kubenswrapper[4890]: I0121 17:03:24.912596 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:24 crc kubenswrapper[4890]: I0121 17:03:24.950448 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns8nl\" (UniqueName: \"kubernetes.io/projected/d6e81c47-e3b9-4d9a-8e45-8351eb01c07a-kube-api-access-ns8nl\") pod \"d6e81c47-e3b9-4d9a-8e45-8351eb01c07a\" (UID: \"d6e81c47-e3b9-4d9a-8e45-8351eb01c07a\") " Jan 21 17:03:24 crc kubenswrapper[4890]: I0121 17:03:24.954559 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e81c47-e3b9-4d9a-8e45-8351eb01c07a-kube-api-access-ns8nl" (OuterVolumeSpecName: "kube-api-access-ns8nl") pod "d6e81c47-e3b9-4d9a-8e45-8351eb01c07a" (UID: "d6e81c47-e3b9-4d9a-8e45-8351eb01c07a"). InnerVolumeSpecName "kube-api-access-ns8nl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.039176 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:25 crc kubenswrapper[4890]: E0121 17:03:25.039599 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e81c47-e3b9-4d9a-8e45-8351eb01c07a" containerName="mariadb-client" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.039631 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e81c47-e3b9-4d9a-8e45-8351eb01c07a" containerName="mariadb-client" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.039797 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6e81c47-e3b9-4d9a-8e45-8351eb01c07a" containerName="mariadb-client" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.040394 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.047263 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.053019 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns8nl\" (UniqueName: \"kubernetes.io/projected/d6e81c47-e3b9-4d9a-8e45-8351eb01c07a-kube-api-access-ns8nl\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.154312 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8mkg\" (UniqueName: \"kubernetes.io/projected/af33cf4b-2e28-4891-84a7-f2c1d7cde213-kube-api-access-q8mkg\") pod \"mariadb-client\" (UID: \"af33cf4b-2e28-4891-84a7-f2c1d7cde213\") " pod="openstack/mariadb-client" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.256186 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8mkg\" (UniqueName: 
\"kubernetes.io/projected/af33cf4b-2e28-4891-84a7-f2c1d7cde213-kube-api-access-q8mkg\") pod \"mariadb-client\" (UID: \"af33cf4b-2e28-4891-84a7-f2c1d7cde213\") " pod="openstack/mariadb-client" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.282117 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8mkg\" (UniqueName: \"kubernetes.io/projected/af33cf4b-2e28-4891-84a7-f2c1d7cde213-kube-api-access-q8mkg\") pod \"mariadb-client\" (UID: \"af33cf4b-2e28-4891-84a7-f2c1d7cde213\") " pod="openstack/mariadb-client" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.365739 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.604400 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91ea12988e8d49e776248b9d0572e6ceeefd112d04bb761c4113ff5a41e02031" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.604506 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.632261 4890 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="d6e81c47-e3b9-4d9a-8e45-8351eb01c07a" podUID="af33cf4b-2e28-4891-84a7-f2c1d7cde213" Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.860876 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:25 crc kubenswrapper[4890]: I0121 17:03:25.924781 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6e81c47-e3b9-4d9a-8e45-8351eb01c07a" path="/var/lib/kubelet/pods/d6e81c47-e3b9-4d9a-8e45-8351eb01c07a/volumes" Jan 21 17:03:26 crc kubenswrapper[4890]: I0121 17:03:26.613605 4890 generic.go:334] "Generic (PLEG): container finished" podID="af33cf4b-2e28-4891-84a7-f2c1d7cde213" containerID="3b21d9a91e998e4bc7645a8505a2f99d53bebfd03763083d3c43d93f49def3b3" exitCode=0 Jan 21 17:03:26 crc kubenswrapper[4890]: I0121 17:03:26.613705 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"af33cf4b-2e28-4891-84a7-f2c1d7cde213","Type":"ContainerDied","Data":"3b21d9a91e998e4bc7645a8505a2f99d53bebfd03763083d3c43d93f49def3b3"} Jan 21 17:03:26 crc kubenswrapper[4890]: I0121 17:03:26.613976 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"af33cf4b-2e28-4891-84a7-f2c1d7cde213","Type":"ContainerStarted","Data":"d00050a0d4d1c3eca018c26c310b5f5e94b2b78825b0e8c844a1afef24f5486e"} Jan 21 17:03:27 crc kubenswrapper[4890]: I0121 17:03:27.891898 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 17:03:27 crc kubenswrapper[4890]: I0121 17:03:27.911189 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_af33cf4b-2e28-4891-84a7-f2c1d7cde213/mariadb-client/0.log" Jan 21 17:03:27 crc kubenswrapper[4890]: I0121 17:03:27.936685 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:27 crc kubenswrapper[4890]: I0121 17:03:27.942532 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 17:03:28 crc kubenswrapper[4890]: I0121 17:03:28.005177 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8mkg\" (UniqueName: \"kubernetes.io/projected/af33cf4b-2e28-4891-84a7-f2c1d7cde213-kube-api-access-q8mkg\") pod \"af33cf4b-2e28-4891-84a7-f2c1d7cde213\" (UID: \"af33cf4b-2e28-4891-84a7-f2c1d7cde213\") " Jan 21 17:03:28 crc kubenswrapper[4890]: I0121 17:03:28.010194 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33cf4b-2e28-4891-84a7-f2c1d7cde213-kube-api-access-q8mkg" (OuterVolumeSpecName: "kube-api-access-q8mkg") pod "af33cf4b-2e28-4891-84a7-f2c1d7cde213" (UID: "af33cf4b-2e28-4891-84a7-f2c1d7cde213"). InnerVolumeSpecName "kube-api-access-q8mkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:03:28 crc kubenswrapper[4890]: I0121 17:03:28.107489 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8mkg\" (UniqueName: \"kubernetes.io/projected/af33cf4b-2e28-4891-84a7-f2c1d7cde213-kube-api-access-q8mkg\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:28 crc kubenswrapper[4890]: I0121 17:03:28.629441 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d00050a0d4d1c3eca018c26c310b5f5e94b2b78825b0e8c844a1afef24f5486e" Jan 21 17:03:28 crc kubenswrapper[4890]: I0121 17:03:28.629522 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 17:03:29 crc kubenswrapper[4890]: I0121 17:03:29.923288 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33cf4b-2e28-4891-84a7-f2c1d7cde213" path="/var/lib/kubelet/pods/af33cf4b-2e28-4891-84a7-f2c1d7cde213/volumes" Jan 21 17:03:37 crc kubenswrapper[4890]: I0121 17:03:37.919371 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:03:37 crc kubenswrapper[4890]: E0121 17:03:37.920060 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:03:52 crc kubenswrapper[4890]: I0121 17:03:52.913670 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:03:52 crc kubenswrapper[4890]: E0121 17:03:52.915454 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.861125 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 17:04:00 crc kubenswrapper[4890]: E0121 17:04:00.862001 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af33cf4b-2e28-4891-84a7-f2c1d7cde213" containerName="mariadb-client" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.862018 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="af33cf4b-2e28-4891-84a7-f2c1d7cde213" containerName="mariadb-client" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.862225 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="af33cf4b-2e28-4891-84a7-f2c1d7cde213" containerName="mariadb-client" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.863242 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.878510 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.878656 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.878714 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.878987 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-znvwb" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.879101 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.889370 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.897931 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.899482 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.918606 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.931585 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.931843 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:00 crc kubenswrapper[4890]: I0121 17:04:00.955267 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.018180 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfmrb\" (UniqueName: \"kubernetes.io/projected/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-kube-api-access-sfmrb\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.018546 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.018752 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.018951 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1c4170cb-8474-4b22-a5e9-ec9aed860cfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c4170cb-8474-4b22-a5e9-ec9aed860cfe\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.019118 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqn8x\" 
(UniqueName: \"kubernetes.io/projected/8b5542db-440c-41a8-855f-4046ecda9ec8-kube-api-access-zqn8x\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.019271 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5542db-440c-41a8-855f-4046ecda9ec8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.019515 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703438eb-576e-4abc-b9ca-3ed7db28e8a2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.019890 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/703438eb-576e-4abc-b9ca-3ed7db28e8a2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.020081 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w98r6\" (UniqueName: \"kubernetes.io/projected/703438eb-576e-4abc-b9ca-3ed7db28e8a2-kube-api-access-w98r6\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.020244 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-config\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.020399 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/703438eb-576e-4abc-b9ca-3ed7db28e8a2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.020647 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/703438eb-576e-4abc-b9ca-3ed7db28e8a2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.020808 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5542db-440c-41a8-855f-4046ecda9ec8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.020961 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.021119 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8b5542db-440c-41a8-855f-4046ecda9ec8-config\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.021334 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-257af42e-1a83-4bdd-8912-ba55009c5860\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-257af42e-1a83-4bdd-8912-ba55009c5860\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.021590 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b5542db-440c-41a8-855f-4046ecda9ec8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.021765 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-247cb060-5d35-4e0b-8740-c4e69e1f224a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247cb060-5d35-4e0b-8740-c4e69e1f224a\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.021914 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/703438eb-576e-4abc-b9ca-3ed7db28e8a2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.022070 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/8b5542db-440c-41a8-855f-4046ecda9ec8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.022214 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5542db-440c-41a8-855f-4046ecda9ec8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.022465 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703438eb-576e-4abc-b9ca-3ed7db28e8a2-config\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.022619 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.022817 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124434 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124478 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5542db-440c-41a8-855f-4046ecda9ec8-config\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124504 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-257af42e-1a83-4bdd-8912-ba55009c5860\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-257af42e-1a83-4bdd-8912-ba55009c5860\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124525 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b5542db-440c-41a8-855f-4046ecda9ec8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124560 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-247cb060-5d35-4e0b-8740-c4e69e1f224a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247cb060-5d35-4e0b-8740-c4e69e1f224a\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124582 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/703438eb-576e-4abc-b9ca-3ed7db28e8a2-ovsdbserver-nb-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124596 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b5542db-440c-41a8-855f-4046ecda9ec8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124611 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5542db-440c-41a8-855f-4046ecda9ec8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124637 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703438eb-576e-4abc-b9ca-3ed7db28e8a2-config\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124651 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124672 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124708 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfmrb\" (UniqueName: \"kubernetes.io/projected/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-kube-api-access-sfmrb\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124745 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124840 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124864 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1c4170cb-8474-4b22-a5e9-ec9aed860cfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c4170cb-8474-4b22-a5e9-ec9aed860cfe\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124888 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqn8x\" (UniqueName: \"kubernetes.io/projected/8b5542db-440c-41a8-855f-4046ecda9ec8-kube-api-access-zqn8x\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124908 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8b5542db-440c-41a8-855f-4046ecda9ec8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124927 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703438eb-576e-4abc-b9ca-3ed7db28e8a2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124944 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/703438eb-576e-4abc-b9ca-3ed7db28e8a2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124960 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w98r6\" (UniqueName: \"kubernetes.io/projected/703438eb-576e-4abc-b9ca-3ed7db28e8a2-kube-api-access-w98r6\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124975 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-config\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.124992 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/703438eb-576e-4abc-b9ca-3ed7db28e8a2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " 
pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.125015 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/703438eb-576e-4abc-b9ca-3ed7db28e8a2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.125032 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5542db-440c-41a8-855f-4046ecda9ec8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.127151 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5542db-440c-41a8-855f-4046ecda9ec8-config\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.127392 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.127559 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b5542db-440c-41a8-855f-4046ecda9ec8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.127465 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/703438eb-576e-4abc-b9ca-3ed7db28e8a2-config\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.127878 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b5542db-440c-41a8-855f-4046ecda9ec8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.128582 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-config\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.128838 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/703438eb-576e-4abc-b9ca-3ed7db28e8a2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.129669 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/703438eb-576e-4abc-b9ca-3ed7db28e8a2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.129815 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.132632 4890 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.132685 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-257af42e-1a83-4bdd-8912-ba55009c5860\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-257af42e-1a83-4bdd-8912-ba55009c5860\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ec135c46a6dbcecae74fb1bb11c82f686f0523edc20a759bfd55a8049e63d2f8/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.133101 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.133454 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.133484 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.133493 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1c4170cb-8474-4b22-a5e9-ec9aed860cfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c4170cb-8474-4b22-a5e9-ec9aed860cfe\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/62c9f56a334db42ea16814e4d98892c56dcf7d54cbeb03778ed9b2a1cdd00dcc/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.133676 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5542db-440c-41a8-855f-4046ecda9ec8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.134055 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.134074 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-247cb060-5d35-4e0b-8740-c4e69e1f224a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247cb060-5d35-4e0b-8740-c4e69e1f224a\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db8ec487d3b3bd04638d27489e205c15b5902c90753e7ae5f505ba0794cd5978/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.139311 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5542db-440c-41a8-855f-4046ecda9ec8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.142961 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/703438eb-576e-4abc-b9ca-3ed7db28e8a2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.143321 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703438eb-576e-4abc-b9ca-3ed7db28e8a2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.143563 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5542db-440c-41a8-855f-4046ecda9ec8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: 
\"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.143827 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/703438eb-576e-4abc-b9ca-3ed7db28e8a2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.144193 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.149075 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqn8x\" (UniqueName: \"kubernetes.io/projected/8b5542db-440c-41a8-855f-4046ecda9ec8-kube-api-access-zqn8x\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.150267 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w98r6\" (UniqueName: \"kubernetes.io/projected/703438eb-576e-4abc-b9ca-3ed7db28e8a2-kube-api-access-w98r6\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.156077 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfmrb\" (UniqueName: \"kubernetes.io/projected/b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187-kube-api-access-sfmrb\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.166243 4890 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-247cb060-5d35-4e0b-8740-c4e69e1f224a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247cb060-5d35-4e0b-8740-c4e69e1f224a\") pod \"ovsdbserver-nb-1\" (UID: \"8b5542db-440c-41a8-855f-4046ecda9ec8\") " pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.166261 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-257af42e-1a83-4bdd-8912-ba55009c5860\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-257af42e-1a83-4bdd-8912-ba55009c5860\") pod \"ovsdbserver-nb-0\" (UID: \"703438eb-576e-4abc-b9ca-3ed7db28e8a2\") " pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.166917 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1c4170cb-8474-4b22-a5e9-ec9aed860cfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c4170cb-8474-4b22-a5e9-ec9aed860cfe\") pod \"ovsdbserver-nb-2\" (UID: \"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187\") " pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.188760 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.216611 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.253943 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.754660 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 17:04:01 crc kubenswrapper[4890]: W0121 17:04:01.831781 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b5542db_440c_41a8_855f_4046ecda9ec8.slice/crio-5f6fc7110414e55b988b0a897153519597cf4359013f640e153f71f0c853c5b1 WatchSource:0}: Error finding container 5f6fc7110414e55b988b0a897153519597cf4359013f640e153f71f0c853c5b1: Status 404 returned error can't find the container with id 5f6fc7110414e55b988b0a897153519597cf4359013f640e153f71f0c853c5b1 Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.832498 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.873810 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"703438eb-576e-4abc-b9ca-3ed7db28e8a2","Type":"ContainerStarted","Data":"695bc3549661ff5e18c1c840d02cba99b61c2086a71ad97a867d6a5fe4463436"} Jan 21 17:04:01 crc kubenswrapper[4890]: I0121 17:04:01.875617 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8b5542db-440c-41a8-855f-4046ecda9ec8","Type":"ContainerStarted","Data":"5f6fc7110414e55b988b0a897153519597cf4359013f640e153f71f0c853c5b1"} Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.708961 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.710603 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.712677 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.713948 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4t82f" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.715150 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.715835 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.723571 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.737614 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.739186 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.745728 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.746999 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.773244 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.776974 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.790583 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862055 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8d3913-19b9-4fdc-ad72-8866d73f5139-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862108 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnp9h\" (UniqueName: \"kubernetes.io/projected/da8d3913-19b9-4fdc-ad72-8866d73f5139-kube-api-access-cnp9h\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862147 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da8d3913-19b9-4fdc-ad72-8866d73f5139-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862178 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1321d6ed-786c-45ea-bacc-14ba6afa47e5-ovsdbserver-sb-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862194 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8d3913-19b9-4fdc-ad72-8866d73f5139-config\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862216 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-42dda3af-d92a-4c2e-a046-76e9923fb448\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-42dda3af-d92a-4c2e-a046-76e9923fb448\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862241 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8d3913-19b9-4fdc-ad72-8866d73f5139-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862345 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1321d6ed-786c-45ea-bacc-14ba6afa47e5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862407 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8xp8\" (UniqueName: \"kubernetes.io/projected/1321d6ed-786c-45ea-bacc-14ba6afa47e5-kube-api-access-t8xp8\") pod \"ovsdbserver-sb-0\" (UID: 
\"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862436 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b2a1e14a-9c08-495a-9bf3-0ae474b70ee6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1e14a-9c08-495a-9bf3-0ae474b70ee6\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862452 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6329e18-cfd2-46bf-862b-ba11f05e02fd-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862504 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8d3913-19b9-4fdc-ad72-8866d73f5139-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862521 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6329e18-cfd2-46bf-862b-ba11f05e02fd-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862538 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1321d6ed-786c-45ea-bacc-14ba6afa47e5-config\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " 
pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862601 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6329e18-cfd2-46bf-862b-ba11f05e02fd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862628 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da8d3913-19b9-4fdc-ad72-8866d73f5139-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862671 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6329e18-cfd2-46bf-862b-ba11f05e02fd-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862699 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1321d6ed-786c-45ea-bacc-14ba6afa47e5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862726 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7a2f2e18-4506-4abf-99c6-2d49acb6b317\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a2f2e18-4506-4abf-99c6-2d49acb6b317\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " 
pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862752 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhw2\" (UniqueName: \"kubernetes.io/projected/a6329e18-cfd2-46bf-862b-ba11f05e02fd-kube-api-access-6jhw2\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862900 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6329e18-cfd2-46bf-862b-ba11f05e02fd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862944 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1321d6ed-786c-45ea-bacc-14ba6afa47e5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.862991 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1321d6ed-786c-45ea-bacc-14ba6afa47e5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.863016 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6329e18-cfd2-46bf-862b-ba11f05e02fd-config\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: 
I0121 17:04:02.909154 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8b5542db-440c-41a8-855f-4046ecda9ec8","Type":"ContainerStarted","Data":"b1a103c4851cace0ed68607ad3bbfd6e6883cff700c2a1a24d6d1780e2a4d251"} Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.909673 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8b5542db-440c-41a8-855f-4046ecda9ec8","Type":"ContainerStarted","Data":"cb7c08826c6f5044265a2cfe6f7a31affbfe2154d8d48434f4abd07c9e79f8b5"} Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.911579 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187","Type":"ContainerStarted","Data":"d66fb881bac8300a9397e82cc3b6dd9103c879fffc103afe68db83a3ed3982e5"} Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.914167 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"703438eb-576e-4abc-b9ca-3ed7db28e8a2","Type":"ContainerStarted","Data":"05843197bb622a4c65d96aa6608b04361784d845b971953f34881c88b2ca5f2e"} Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.914222 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"703438eb-576e-4abc-b9ca-3ed7db28e8a2","Type":"ContainerStarted","Data":"d92bedb6f7278c158b731b0865a6cbf3a40a1b5acfd15ecaad6e3b92956e3422"} Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.940227 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.940203545 podStartE2EDuration="3.940203545s" podCreationTimestamp="2026-01-21 17:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:02.931425137 +0000 UTC m=+5525.292867546" watchObservedRunningTime="2026-01-21 
17:04:02.940203545 +0000 UTC m=+5525.301645954" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964289 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6329e18-cfd2-46bf-862b-ba11f05e02fd-config\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964371 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8d3913-19b9-4fdc-ad72-8866d73f5139-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964399 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnp9h\" (UniqueName: \"kubernetes.io/projected/da8d3913-19b9-4fdc-ad72-8866d73f5139-kube-api-access-cnp9h\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964435 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da8d3913-19b9-4fdc-ad72-8866d73f5139-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964459 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1321d6ed-786c-45ea-bacc-14ba6afa47e5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964476 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8d3913-19b9-4fdc-ad72-8866d73f5139-config\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964499 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-42dda3af-d92a-4c2e-a046-76e9923fb448\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-42dda3af-d92a-4c2e-a046-76e9923fb448\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964546 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8d3913-19b9-4fdc-ad72-8866d73f5139-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964581 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1321d6ed-786c-45ea-bacc-14ba6afa47e5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964599 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8xp8\" (UniqueName: \"kubernetes.io/projected/1321d6ed-786c-45ea-bacc-14ba6afa47e5-kube-api-access-t8xp8\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964621 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b2a1e14a-9c08-495a-9bf3-0ae474b70ee6\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1e14a-9c08-495a-9bf3-0ae474b70ee6\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964637 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6329e18-cfd2-46bf-862b-ba11f05e02fd-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964655 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8d3913-19b9-4fdc-ad72-8866d73f5139-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964670 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6329e18-cfd2-46bf-862b-ba11f05e02fd-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964685 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1321d6ed-786c-45ea-bacc-14ba6afa47e5-config\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964703 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da8d3913-19b9-4fdc-ad72-8866d73f5139-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " 
pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964729 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6329e18-cfd2-46bf-862b-ba11f05e02fd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964757 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6329e18-cfd2-46bf-862b-ba11f05e02fd-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964778 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1321d6ed-786c-45ea-bacc-14ba6afa47e5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964795 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7a2f2e18-4506-4abf-99c6-2d49acb6b317\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a2f2e18-4506-4abf-99c6-2d49acb6b317\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964826 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jhw2\" (UniqueName: \"kubernetes.io/projected/a6329e18-cfd2-46bf-862b-ba11f05e02fd-kube-api-access-6jhw2\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 
17:04:02.964868 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6329e18-cfd2-46bf-862b-ba11f05e02fd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964954 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1321d6ed-786c-45ea-bacc-14ba6afa47e5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.964975 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1321d6ed-786c-45ea-bacc-14ba6afa47e5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.965072 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.9650422130000003 podStartE2EDuration="3.965042213s" podCreationTimestamp="2026-01-21 17:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:02.958449829 +0000 UTC m=+5525.319892248" watchObservedRunningTime="2026-01-21 17:04:02.965042213 +0000 UTC m=+5525.326484632" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.965997 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6329e18-cfd2-46bf-862b-ba11f05e02fd-config\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc 
kubenswrapper[4890]: I0121 17:04:02.967717 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1321d6ed-786c-45ea-bacc-14ba6afa47e5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.970606 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8d3913-19b9-4fdc-ad72-8866d73f5139-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.968729 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6329e18-cfd2-46bf-862b-ba11f05e02fd-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.969080 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6329e18-cfd2-46bf-862b-ba11f05e02fd-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.970272 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8d3913-19b9-4fdc-ad72-8866d73f5139-config\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.968698 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da8d3913-19b9-4fdc-ad72-8866d73f5139-scripts\") pod 
\"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.971433 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1321d6ed-786c-45ea-bacc-14ba6afa47e5-config\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.971559 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8d3913-19b9-4fdc-ad72-8866d73f5139-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.971950 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da8d3913-19b9-4fdc-ad72-8866d73f5139-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.972036 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1321d6ed-786c-45ea-bacc-14ba6afa47e5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.972099 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.972121 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b2a1e14a-9c08-495a-9bf3-0ae474b70ee6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1e14a-9c08-495a-9bf3-0ae474b70ee6\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/29d619e0f0b421a02cb10f829693c0a839c2b18b216d8546c87f957f27c9fc7c/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.972405 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6329e18-cfd2-46bf-862b-ba11f05e02fd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.973003 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1321d6ed-786c-45ea-bacc-14ba6afa47e5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.973397 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6329e18-cfd2-46bf-862b-ba11f05e02fd-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.973720 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6329e18-cfd2-46bf-862b-ba11f05e02fd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: 
\"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.974383 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1321d6ed-786c-45ea-bacc-14ba6afa47e5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.974495 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1321d6ed-786c-45ea-bacc-14ba6afa47e5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.975451 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8d3913-19b9-4fdc-ad72-8866d73f5139-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.982491 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8xp8\" (UniqueName: \"kubernetes.io/projected/1321d6ed-786c-45ea-bacc-14ba6afa47e5-kube-api-access-t8xp8\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.985327 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnp9h\" (UniqueName: \"kubernetes.io/projected/da8d3913-19b9-4fdc-ad72-8866d73f5139-kube-api-access-cnp9h\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.985342 4890 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.985406 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-42dda3af-d92a-4c2e-a046-76e9923fb448\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-42dda3af-d92a-4c2e-a046-76e9923fb448\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ee550cc6ef3f07c2c3525cb7589f2837b13ab30c9cae662a432638b4fecf9bc9/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.985362 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.985519 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7a2f2e18-4506-4abf-99c6-2d49acb6b317\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a2f2e18-4506-4abf-99c6-2d49acb6b317\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9ca5bfc17dec7df27490ed12813fff2539cf46b34e993d7fca4b58dcae0bca8d/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:02 crc kubenswrapper[4890]: I0121 17:04:02.990693 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jhw2\" (UniqueName: \"kubernetes.io/projected/a6329e18-cfd2-46bf-862b-ba11f05e02fd-kube-api-access-6jhw2\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.017692 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-42dda3af-d92a-4c2e-a046-76e9923fb448\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-42dda3af-d92a-4c2e-a046-76e9923fb448\") pod \"ovsdbserver-sb-2\" (UID: \"da8d3913-19b9-4fdc-ad72-8866d73f5139\") " pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.017692 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b2a1e14a-9c08-495a-9bf3-0ae474b70ee6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1e14a-9c08-495a-9bf3-0ae474b70ee6\") pod \"ovsdbserver-sb-0\" (UID: \"1321d6ed-786c-45ea-bacc-14ba6afa47e5\") " pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.024733 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7a2f2e18-4506-4abf-99c6-2d49acb6b317\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a2f2e18-4506-4abf-99c6-2d49acb6b317\") pod \"ovsdbserver-sb-1\" (UID: \"a6329e18-cfd2-46bf-862b-ba11f05e02fd\") " pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.033673 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.055092 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.061294 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.589099 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 17:04:03 crc kubenswrapper[4890]: W0121 17:04:03.599533 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1321d6ed_786c_45ea_bacc_14ba6afa47e5.slice/crio-d99cb1a6cebc5ae3d9199d93a08c498d8a219d4aaa85d1d1bb6037b384e5ae82 WatchSource:0}: Error finding container d99cb1a6cebc5ae3d9199d93a08c498d8a219d4aaa85d1d1bb6037b384e5ae82: Status 404 returned error can't find the container with id d99cb1a6cebc5ae3d9199d93a08c498d8a219d4aaa85d1d1bb6037b384e5ae82 Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.684725 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 17:04:03 crc kubenswrapper[4890]: W0121 17:04:03.688210 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6329e18_cfd2_46bf_862b_ba11f05e02fd.slice/crio-a732d02f9c641042bdb5267557001c91739e8f39a1e46e974c9cdfe85774aacb WatchSource:0}: Error finding container a732d02f9c641042bdb5267557001c91739e8f39a1e46e974c9cdfe85774aacb: Status 404 returned error can't find the container with id a732d02f9c641042bdb5267557001c91739e8f39a1e46e974c9cdfe85774aacb Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.984786 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1321d6ed-786c-45ea-bacc-14ba6afa47e5","Type":"ContainerStarted","Data":"a08132d9e18c5c3614010742f1eded12c63cabb600b40fc0ba334324558c016e"} Jan 21 17:04:03 crc kubenswrapper[4890]: I0121 17:04:03.984838 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"1321d6ed-786c-45ea-bacc-14ba6afa47e5","Type":"ContainerStarted","Data":"d99cb1a6cebc5ae3d9199d93a08c498d8a219d4aaa85d1d1bb6037b384e5ae82"} Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.009977 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"a6329e18-cfd2-46bf-862b-ba11f05e02fd","Type":"ContainerStarted","Data":"95eb738ddf033c3329e77ed977246fd2439aebc62fcc3fa4c25f3068be610bba"} Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.010029 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"a6329e18-cfd2-46bf-862b-ba11f05e02fd","Type":"ContainerStarted","Data":"a732d02f9c641042bdb5267557001c91739e8f39a1e46e974c9cdfe85774aacb"} Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.013692 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187","Type":"ContainerStarted","Data":"f4281f7ed680da564a16b6f1a3bed86b8718cf4283c672b03cfa8b335a48e4a2"} Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.013728 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187","Type":"ContainerStarted","Data":"072bc477a1a590b7421847477f5008f6040068b56f309274e6fdaa0c5691c4ad"} Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.068564 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=5.068541061 podStartE2EDuration="5.068541061s" podCreationTimestamp="2026-01-21 17:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:04.037900479 +0000 UTC m=+5526.399342888" watchObservedRunningTime="2026-01-21 17:04:04.068541061 +0000 UTC m=+5526.429983470" Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.189644 4890 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.217263 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.224171 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.254859 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.288336 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:04 crc kubenswrapper[4890]: I0121 17:04:04.362210 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 17:04:04 crc kubenswrapper[4890]: W0121 17:04:04.364154 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda8d3913_19b9_4fdc_ad72_8866d73f5139.slice/crio-0dc9cd2c9fde5234ee3a77229055453693fd6012b269a391af9922f7a8b5a2c9 WatchSource:0}: Error finding container 0dc9cd2c9fde5234ee3a77229055453693fd6012b269a391af9922f7a8b5a2c9: Status 404 returned error can't find the container with id 0dc9cd2c9fde5234ee3a77229055453693fd6012b269a391af9922f7a8b5a2c9 Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 17:04:05.020693 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1321d6ed-786c-45ea-bacc-14ba6afa47e5","Type":"ContainerStarted","Data":"20adcaed22b46a643d9793d68ae7de46b7a71bebbe5c53849f184acc808f71ce"} Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 17:04:05.023801 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" 
event={"ID":"da8d3913-19b9-4fdc-ad72-8866d73f5139","Type":"ContainerStarted","Data":"93c3c6bec48bfcb38c4ff82d87161386f529aefd684c5cc81965f1c7a205aeb4"} Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 17:04:05.023857 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"da8d3913-19b9-4fdc-ad72-8866d73f5139","Type":"ContainerStarted","Data":"39fa26099b5be4009c3c4535f080004bd47ac6354e23c737660f0d8a0713b171"} Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 17:04:05.023870 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"da8d3913-19b9-4fdc-ad72-8866d73f5139","Type":"ContainerStarted","Data":"0dc9cd2c9fde5234ee3a77229055453693fd6012b269a391af9922f7a8b5a2c9"} Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 17:04:05.028641 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"a6329e18-cfd2-46bf-862b-ba11f05e02fd","Type":"ContainerStarted","Data":"73fd7db147ef014a5ac96a3ead2034f21cccb4d7534838224f6fb9151b19e856"} Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 17:04:05.028686 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 17:04:05.029195 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 17:04:05.043185 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.043168502 podStartE2EDuration="4.043168502s" podCreationTimestamp="2026-01-21 17:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:05.038943947 +0000 UTC m=+5527.400386376" watchObservedRunningTime="2026-01-21 17:04:05.043168502 +0000 UTC m=+5527.404610911" Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 
17:04:05.066703 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.066681237 podStartE2EDuration="4.066681237s" podCreationTimestamp="2026-01-21 17:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:05.058601146 +0000 UTC m=+5527.420043565" watchObservedRunningTime="2026-01-21 17:04:05.066681237 +0000 UTC m=+5527.428123646" Jan 21 17:04:05 crc kubenswrapper[4890]: I0121 17:04:05.077018 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.076995654 podStartE2EDuration="4.076995654s" podCreationTimestamp="2026-01-21 17:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:05.076415739 +0000 UTC m=+5527.437858148" watchObservedRunningTime="2026-01-21 17:04:05.076995654 +0000 UTC m=+5527.438438073" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.034198 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.055691 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.063524 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.073461 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.073518 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.073581 4890 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.093512 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.218440 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.305569 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-q4fzf"] Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.307301 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.315106 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.339538 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-q4fzf"] Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.442198 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv948\" (UniqueName: \"kubernetes.io/projected/7f68f706-f9ab-4e77-9ff5-69c824d4d760-kube-api-access-vv948\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.442611 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-ovsdbserver-nb\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 
17:04:06.442662 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-dns-svc\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.442726 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-config\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.544600 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-ovsdbserver-nb\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.544679 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-dns-svc\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.544746 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-config\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.544834 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vv948\" (UniqueName: \"kubernetes.io/projected/7f68f706-f9ab-4e77-9ff5-69c824d4d760-kube-api-access-vv948\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.545802 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-dns-svc\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.545853 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-config\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.546287 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-ovsdbserver-nb\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.565182 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv948\" (UniqueName: \"kubernetes.io/projected/7f68f706-f9ab-4e77-9ff5-69c824d4d760-kube-api-access-vv948\") pod \"dnsmasq-dns-8574559fdf-q4fzf\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:06 crc kubenswrapper[4890]: I0121 17:04:06.630679 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:07 crc kubenswrapper[4890]: I0121 17:04:07.049933 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:07 crc kubenswrapper[4890]: I0121 17:04:07.050572 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:07 crc kubenswrapper[4890]: I0121 17:04:07.055856 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-q4fzf"] Jan 21 17:04:07 crc kubenswrapper[4890]: I0121 17:04:07.255150 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:07 crc kubenswrapper[4890]: I0121 17:04:07.919413 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:04:07 crc kubenswrapper[4890]: E0121 17:04:07.920069 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.057538 4890 generic.go:334] "Generic (PLEG): container finished" podID="7f68f706-f9ab-4e77-9ff5-69c824d4d760" containerID="149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68" exitCode=0 Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.057675 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" event={"ID":"7f68f706-f9ab-4e77-9ff5-69c824d4d760","Type":"ContainerDied","Data":"149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68"} Jan 21 17:04:08 crc 
kubenswrapper[4890]: I0121 17:04:08.057715 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" event={"ID":"7f68f706-f9ab-4e77-9ff5-69c824d4d760","Type":"ContainerStarted","Data":"23e7c642bf0a9e3675a2e0d3f7d6572d490df01bb4d972115c2cbf597cc1df79"} Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.061438 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.095676 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.108049 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.114469 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.396843 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-q4fzf"] Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.422214 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-p999g"] Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.423785 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.432267 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.450832 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-p999g"] Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.593339 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-nb\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.593678 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-config\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.593733 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-sb\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.593765 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm7h\" (UniqueName: \"kubernetes.io/projected/abaf98b1-05e5-4669-904c-42de87a966f5-kube-api-access-9lm7h\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " 
pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.593798 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-dns-svc\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.696466 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-sb\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.696545 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lm7h\" (UniqueName: \"kubernetes.io/projected/abaf98b1-05e5-4669-904c-42de87a966f5-kube-api-access-9lm7h\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.696605 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-dns-svc\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.696677 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-nb\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 
21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.696699 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-config\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.697594 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-config\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.697605 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-sb\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.698257 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-dns-svc\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.698844 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-nb\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.724533 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9lm7h\" (UniqueName: \"kubernetes.io/projected/abaf98b1-05e5-4669-904c-42de87a966f5-kube-api-access-9lm7h\") pod \"dnsmasq-dns-69db5595f9-p999g\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.753925 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:08 crc kubenswrapper[4890]: I0121 17:04:08.990638 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-p999g"] Jan 21 17:04:09 crc kubenswrapper[4890]: W0121 17:04:09.006016 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabaf98b1_05e5_4669_904c_42de87a966f5.slice/crio-f53b9b61c60c10bb6edb4e6f25f21ed3a530ba8cc9c264f7e594b98fac1f3681 WatchSource:0}: Error finding container f53b9b61c60c10bb6edb4e6f25f21ed3a530ba8cc9c264f7e594b98fac1f3681: Status 404 returned error can't find the container with id f53b9b61c60c10bb6edb4e6f25f21ed3a530ba8cc9c264f7e594b98fac1f3681 Jan 21 17:04:09 crc kubenswrapper[4890]: I0121 17:04:09.066227 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-p999g" event={"ID":"abaf98b1-05e5-4669-904c-42de87a966f5","Type":"ContainerStarted","Data":"f53b9b61c60c10bb6edb4e6f25f21ed3a530ba8cc9c264f7e594b98fac1f3681"} Jan 21 17:04:09 crc kubenswrapper[4890]: I0121 17:04:09.069512 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" event={"ID":"7f68f706-f9ab-4e77-9ff5-69c824d4d760","Type":"ContainerStarted","Data":"c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166"} Jan 21 17:04:09 crc kubenswrapper[4890]: I0121 17:04:09.090598 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" podStartSLOduration=3.090578067 
podStartE2EDuration="3.090578067s" podCreationTimestamp="2026-01-21 17:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:09.086424454 +0000 UTC m=+5531.447866873" watchObservedRunningTime="2026-01-21 17:04:09.090578067 +0000 UTC m=+5531.452020476" Jan 21 17:04:09 crc kubenswrapper[4890]: I0121 17:04:09.115486 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:09 crc kubenswrapper[4890]: I0121 17:04:09.162934 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.076403 4890 generic.go:334] "Generic (PLEG): container finished" podID="abaf98b1-05e5-4669-904c-42de87a966f5" containerID="df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47" exitCode=0 Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.077095 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-p999g" event={"ID":"abaf98b1-05e5-4669-904c-42de87a966f5","Type":"ContainerDied","Data":"df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47"} Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.077415 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.077424 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" podUID="7f68f706-f9ab-4e77-9ff5-69c824d4d760" containerName="dnsmasq-dns" containerID="cri-o://c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166" gracePeriod=10 Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.512958 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.627762 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv948\" (UniqueName: \"kubernetes.io/projected/7f68f706-f9ab-4e77-9ff5-69c824d4d760-kube-api-access-vv948\") pod \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.627866 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-config\") pod \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.628102 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-dns-svc\") pod \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.628171 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-ovsdbserver-nb\") pod \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\" (UID: \"7f68f706-f9ab-4e77-9ff5-69c824d4d760\") " Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.661610 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f68f706-f9ab-4e77-9ff5-69c824d4d760-kube-api-access-vv948" (OuterVolumeSpecName: "kube-api-access-vv948") pod "7f68f706-f9ab-4e77-9ff5-69c824d4d760" (UID: "7f68f706-f9ab-4e77-9ff5-69c824d4d760"). InnerVolumeSpecName "kube-api-access-vv948". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.690373 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f68f706-f9ab-4e77-9ff5-69c824d4d760" (UID: "7f68f706-f9ab-4e77-9ff5-69c824d4d760"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.733038 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv948\" (UniqueName: \"kubernetes.io/projected/7f68f706-f9ab-4e77-9ff5-69c824d4d760-kube-api-access-vv948\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.733081 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.746874 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-config" (OuterVolumeSpecName: "config") pod "7f68f706-f9ab-4e77-9ff5-69c824d4d760" (UID: "7f68f706-f9ab-4e77-9ff5-69c824d4d760"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.754089 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f68f706-f9ab-4e77-9ff5-69c824d4d760" (UID: "7f68f706-f9ab-4e77-9ff5-69c824d4d760"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.835035 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:10 crc kubenswrapper[4890]: I0121 17:04:10.835078 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f68f706-f9ab-4e77-9ff5-69c824d4d760-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.086539 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-p999g" event={"ID":"abaf98b1-05e5-4669-904c-42de87a966f5","Type":"ContainerStarted","Data":"92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227"} Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.086619 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.088387 4890 generic.go:334] "Generic (PLEG): container finished" podID="7f68f706-f9ab-4e77-9ff5-69c824d4d760" containerID="c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166" exitCode=0 Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.088452 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" event={"ID":"7f68f706-f9ab-4e77-9ff5-69c824d4d760","Type":"ContainerDied","Data":"c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166"} Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.088476 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" event={"ID":"7f68f706-f9ab-4e77-9ff5-69c824d4d760","Type":"ContainerDied","Data":"23e7c642bf0a9e3675a2e0d3f7d6572d490df01bb4d972115c2cbf597cc1df79"} Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.088478 
4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-q4fzf" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.088496 4890 scope.go:117] "RemoveContainer" containerID="c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.108368 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69db5595f9-p999g" podStartSLOduration=3.108335653 podStartE2EDuration="3.108335653s" podCreationTimestamp="2026-01-21 17:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:11.108150238 +0000 UTC m=+5533.469592687" watchObservedRunningTime="2026-01-21 17:04:11.108335653 +0000 UTC m=+5533.469778062" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.119313 4890 scope.go:117] "RemoveContainer" containerID="149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.128112 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-q4fzf"] Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.136846 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-q4fzf"] Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.153667 4890 scope.go:117] "RemoveContainer" containerID="c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166" Jan 21 17:04:11 crc kubenswrapper[4890]: E0121 17:04:11.154062 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166\": container with ID starting with c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166 not found: ID does not exist" 
containerID="c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.154104 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166"} err="failed to get container status \"c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166\": rpc error: code = NotFound desc = could not find container \"c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166\": container with ID starting with c66a7bb57dac429e746ad7c94d56d4dcf7836954b23bb728dd4b1efe3bfcb166 not found: ID does not exist" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.154128 4890 scope.go:117] "RemoveContainer" containerID="149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68" Jan 21 17:04:11 crc kubenswrapper[4890]: E0121 17:04:11.154576 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68\": container with ID starting with 149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68 not found: ID does not exist" containerID="149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.154592 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68"} err="failed to get container status \"149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68\": rpc error: code = NotFound desc = could not find container \"149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68\": container with ID starting with 149fd4b8d9074696a07799f10b24ee3874157694e353811f8cd5ad00c8a1eb68 not found: ID does not exist" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.382996 4890 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 21 17:04:11 crc kubenswrapper[4890]: E0121 17:04:11.383418 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f68f706-f9ab-4e77-9ff5-69c824d4d760" containerName="dnsmasq-dns" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.383435 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f68f706-f9ab-4e77-9ff5-69c824d4d760" containerName="dnsmasq-dns" Jan 21 17:04:11 crc kubenswrapper[4890]: E0121 17:04:11.383474 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f68f706-f9ab-4e77-9ff5-69c824d4d760" containerName="init" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.383483 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f68f706-f9ab-4e77-9ff5-69c824d4d760" containerName="init" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.383681 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f68f706-f9ab-4e77-9ff5-69c824d4d760" containerName="dnsmasq-dns" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.384343 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.386643 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.390912 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.447490 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27j5c\" (UniqueName: \"kubernetes.io/projected/38309d11-5d19-4fcf-b65e-ba1c6acb4f69-kube-api-access-27j5c\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") " pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.447574 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/38309d11-5d19-4fcf-b65e-ba1c6acb4f69-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") " pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.447724 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3e28cc1-410c-496d-b25a-b1dc8199ce7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3e28cc1-410c-496d-b25a-b1dc8199ce7e\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") " pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.549290 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27j5c\" (UniqueName: \"kubernetes.io/projected/38309d11-5d19-4fcf-b65e-ba1c6acb4f69-kube-api-access-27j5c\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") " pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.549410 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/38309d11-5d19-4fcf-b65e-ba1c6acb4f69-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") " pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.549472 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d3e28cc1-410c-496d-b25a-b1dc8199ce7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3e28cc1-410c-496d-b25a-b1dc8199ce7e\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") " pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.553042 4890 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.553078 4890 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d3e28cc1-410c-496d-b25a-b1dc8199ce7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3e28cc1-410c-496d-b25a-b1dc8199ce7e\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5238a777a18eb01423718ddbe102f181a883c567e9c5134fdbc268f0bbb20ff3/globalmount\"" pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.561396 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/38309d11-5d19-4fcf-b65e-ba1c6acb4f69-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") " pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.565567 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27j5c\" (UniqueName: 
\"kubernetes.io/projected/38309d11-5d19-4fcf-b65e-ba1c6acb4f69-kube-api-access-27j5c\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") " pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.596196 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d3e28cc1-410c-496d-b25a-b1dc8199ce7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3e28cc1-410c-496d-b25a-b1dc8199ce7e\") pod \"ovn-copy-data\" (UID: \"38309d11-5d19-4fcf-b65e-ba1c6acb4f69\") " pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.706198 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 21 17:04:11 crc kubenswrapper[4890]: I0121 17:04:11.923850 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f68f706-f9ab-4e77-9ff5-69c824d4d760" path="/var/lib/kubelet/pods/7f68f706-f9ab-4e77-9ff5-69c824d4d760/volumes" Jan 21 17:04:12 crc kubenswrapper[4890]: I0121 17:04:12.219100 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 21 17:04:12 crc kubenswrapper[4890]: W0121 17:04:12.223843 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38309d11_5d19_4fcf_b65e_ba1c6acb4f69.slice/crio-6d8df819b1ad025f95436481d9398601508cd4bb87e0391aed887b3aa1250a5c WatchSource:0}: Error finding container 6d8df819b1ad025f95436481d9398601508cd4bb87e0391aed887b3aa1250a5c: Status 404 returned error can't find the container with id 6d8df819b1ad025f95436481d9398601508cd4bb87e0391aed887b3aa1250a5c Jan 21 17:04:12 crc kubenswrapper[4890]: I0121 17:04:12.225956 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:04:13 crc kubenswrapper[4890]: I0121 17:04:13.107169 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-copy-data" event={"ID":"38309d11-5d19-4fcf-b65e-ba1c6acb4f69","Type":"ContainerStarted","Data":"07f1d3aa0606ee8a46692528150b84891251debc011289ae44be62cee142529c"} Jan 21 17:04:13 crc kubenswrapper[4890]: I0121 17:04:13.107551 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"38309d11-5d19-4fcf-b65e-ba1c6acb4f69","Type":"ContainerStarted","Data":"6d8df819b1ad025f95436481d9398601508cd4bb87e0391aed887b3aa1250a5c"} Jan 21 17:04:13 crc kubenswrapper[4890]: I0121 17:04:13.120275 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.461963297 podStartE2EDuration="3.120251132s" podCreationTimestamp="2026-01-21 17:04:10 +0000 UTC" firstStartedPulling="2026-01-21 17:04:12.225743417 +0000 UTC m=+5534.587185826" lastFinishedPulling="2026-01-21 17:04:12.884031252 +0000 UTC m=+5535.245473661" observedRunningTime="2026-01-21 17:04:13.119593555 +0000 UTC m=+5535.481035974" watchObservedRunningTime="2026-01-21 17:04:13.120251132 +0000 UTC m=+5535.481693551" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.120203 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.122341 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.133066 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.133384 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.133507 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.133846 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pxkxl" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.145674 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.171812 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bc77d57-1207-4646-a8cc-5855c7f15f91-config\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.171866 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm4sk\" (UniqueName: \"kubernetes.io/projected/3bc77d57-1207-4646-a8cc-5855c7f15f91-kube-api-access-jm4sk\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.171891 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc77d57-1207-4646-a8cc-5855c7f15f91-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " 
pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.171942 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc77d57-1207-4646-a8cc-5855c7f15f91-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.171962 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc77d57-1207-4646-a8cc-5855c7f15f91-scripts\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.171979 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc77d57-1207-4646-a8cc-5855c7f15f91-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.172015 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3bc77d57-1207-4646-a8cc-5855c7f15f91-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.273617 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc77d57-1207-4646-a8cc-5855c7f15f91-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.273674 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc77d57-1207-4646-a8cc-5855c7f15f91-scripts\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.273703 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc77d57-1207-4646-a8cc-5855c7f15f91-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.273755 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3bc77d57-1207-4646-a8cc-5855c7f15f91-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.273828 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bc77d57-1207-4646-a8cc-5855c7f15f91-config\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.273857 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm4sk\" (UniqueName: \"kubernetes.io/projected/3bc77d57-1207-4646-a8cc-5855c7f15f91-kube-api-access-jm4sk\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.273885 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc77d57-1207-4646-a8cc-5855c7f15f91-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.276076 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3bc77d57-1207-4646-a8cc-5855c7f15f91-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.276258 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc77d57-1207-4646-a8cc-5855c7f15f91-scripts\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.276595 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bc77d57-1207-4646-a8cc-5855c7f15f91-config\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.281074 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc77d57-1207-4646-a8cc-5855c7f15f91-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.282960 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc77d57-1207-4646-a8cc-5855c7f15f91-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.283766 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3bc77d57-1207-4646-a8cc-5855c7f15f91-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.294696 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm4sk\" (UniqueName: \"kubernetes.io/projected/3bc77d57-1207-4646-a8cc-5855c7f15f91-kube-api-access-jm4sk\") pod \"ovn-northd-0\" (UID: \"3bc77d57-1207-4646-a8cc-5855c7f15f91\") " pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.448903 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.754179 4890 scope.go:117] "RemoveContainer" containerID="ddd603ae2c788ecc2f7372dd6b57ac470f72fee50086fc76794d013e0718b226" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.755498 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.834494 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-b77mx"] Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.834763 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-b77mx" podUID="65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" containerName="dnsmasq-dns" containerID="cri-o://9bad55231cf2b6afbc5023006945a275739f16f2298dedd54693ee5a0d5d7fde" gracePeriod=10 Jan 21 17:04:18 crc kubenswrapper[4890]: I0121 17:04:18.964913 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.172437 4890 generic.go:334] "Generic (PLEG): container finished" podID="65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" containerID="9bad55231cf2b6afbc5023006945a275739f16f2298dedd54693ee5a0d5d7fde" 
exitCode=0 Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.174633 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-b77mx" event={"ID":"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b","Type":"ContainerDied","Data":"9bad55231cf2b6afbc5023006945a275739f16f2298dedd54693ee5a0d5d7fde"} Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.184184 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3bc77d57-1207-4646-a8cc-5855c7f15f91","Type":"ContainerStarted","Data":"126db2b71fa1dcc591e61d8da702fd612da4336e786218a98893aebb21c5559e"} Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.499039 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-b77mx" Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.616623 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-dns-svc\") pod \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.616946 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-config\") pod \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.617035 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlbg8\" (UniqueName: \"kubernetes.io/projected/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-kube-api-access-dlbg8\") pod \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\" (UID: \"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b\") " Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.620460 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-kube-api-access-dlbg8" (OuterVolumeSpecName: "kube-api-access-dlbg8") pod "65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" (UID: "65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b"). InnerVolumeSpecName "kube-api-access-dlbg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.651928 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" (UID: "65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.662985 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-config" (OuterVolumeSpecName: "config") pod "65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" (UID: "65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.719752 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlbg8\" (UniqueName: \"kubernetes.io/projected/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-kube-api-access-dlbg8\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.719803 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.719816 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:19 crc kubenswrapper[4890]: I0121 17:04:19.914937 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:04:19 crc kubenswrapper[4890]: E0121 17:04:19.915580 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.190775 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-b77mx" event={"ID":"65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b","Type":"ContainerDied","Data":"84a5c528a0d2e91b7cc305373b91a4a25700cf881454e5b0780f41644d770608"} Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.190827 4890 scope.go:117] "RemoveContainer" 
containerID="9bad55231cf2b6afbc5023006945a275739f16f2298dedd54693ee5a0d5d7fde" Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.190965 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-b77mx" Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.195585 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3bc77d57-1207-4646-a8cc-5855c7f15f91","Type":"ContainerStarted","Data":"ce59209e0c32379bd9e43be9619c7bfce18eefad52ac6f4662780b05931e5a3a"} Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.195625 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3bc77d57-1207-4646-a8cc-5855c7f15f91","Type":"ContainerStarted","Data":"327611564da269350fcf961c83e7c5948408ad2482f7bf973d0f9f6413de634b"} Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.195957 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.212267 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-b77mx"] Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.217619 4890 scope.go:117] "RemoveContainer" containerID="85b521e5752cdd4e5ef8db6630d0e0d520cc843370b388c0e6ec03c6b912ebc7" Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.218593 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-b77mx"] Jan 21 17:04:20 crc kubenswrapper[4890]: I0121 17:04:20.234800 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.234783603 podStartE2EDuration="2.234783603s" podCreationTimestamp="2026-01-21 17:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:20.22704114 +0000 UTC m=+5542.588483599" 
watchObservedRunningTime="2026-01-21 17:04:20.234783603 +0000 UTC m=+5542.596226012" Jan 21 17:04:21 crc kubenswrapper[4890]: I0121 17:04:21.925391 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" path="/var/lib/kubelet/pods/65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b/volumes" Jan 21 17:04:23 crc kubenswrapper[4890]: I0121 17:04:23.904312 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-d7hbm"] Jan 21 17:04:23 crc kubenswrapper[4890]: E0121 17:04:23.904929 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" containerName="dnsmasq-dns" Jan 21 17:04:23 crc kubenswrapper[4890]: I0121 17:04:23.904942 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" containerName="dnsmasq-dns" Jan 21 17:04:23 crc kubenswrapper[4890]: E0121 17:04:23.904957 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" containerName="init" Jan 21 17:04:23 crc kubenswrapper[4890]: I0121 17:04:23.904963 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" containerName="init" Jan 21 17:04:23 crc kubenswrapper[4890]: I0121 17:04:23.905118 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b0a05f-5ab2-48bc-81c2-5d05cfedaa8b" containerName="dnsmasq-dns" Jan 21 17:04:23 crc kubenswrapper[4890]: I0121 17:04:23.905660 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:23 crc kubenswrapper[4890]: I0121 17:04:23.912319 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d7hbm"] Jan 21 17:04:23 crc kubenswrapper[4890]: I0121 17:04:23.987814 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10501a22-53ef-4e70-9746-bea547a51fac-operator-scripts\") pod \"keystone-db-create-d7hbm\" (UID: \"10501a22-53ef-4e70-9746-bea547a51fac\") " pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:23 crc kubenswrapper[4890]: I0121 17:04:23.988045 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkcj\" (UniqueName: \"kubernetes.io/projected/10501a22-53ef-4e70-9746-bea547a51fac-kube-api-access-njkcj\") pod \"keystone-db-create-d7hbm\" (UID: \"10501a22-53ef-4e70-9746-bea547a51fac\") " pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.010256 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0a87-account-create-update-hv2jn"] Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.011490 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.014038 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.021208 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0a87-account-create-update-hv2jn"] Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.089268 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10501a22-53ef-4e70-9746-bea547a51fac-operator-scripts\") pod \"keystone-db-create-d7hbm\" (UID: \"10501a22-53ef-4e70-9746-bea547a51fac\") " pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.089373 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01496eb9-e1e6-45fc-872e-63b8be1baec4-operator-scripts\") pod \"keystone-0a87-account-create-update-hv2jn\" (UID: \"01496eb9-e1e6-45fc-872e-63b8be1baec4\") " pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.089406 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkcj\" (UniqueName: \"kubernetes.io/projected/10501a22-53ef-4e70-9746-bea547a51fac-kube-api-access-njkcj\") pod \"keystone-db-create-d7hbm\" (UID: \"10501a22-53ef-4e70-9746-bea547a51fac\") " pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.089433 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5wsd\" (UniqueName: \"kubernetes.io/projected/01496eb9-e1e6-45fc-872e-63b8be1baec4-kube-api-access-q5wsd\") pod \"keystone-0a87-account-create-update-hv2jn\" (UID: 
\"01496eb9-e1e6-45fc-872e-63b8be1baec4\") " pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.090126 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10501a22-53ef-4e70-9746-bea547a51fac-operator-scripts\") pod \"keystone-db-create-d7hbm\" (UID: \"10501a22-53ef-4e70-9746-bea547a51fac\") " pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.110536 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkcj\" (UniqueName: \"kubernetes.io/projected/10501a22-53ef-4e70-9746-bea547a51fac-kube-api-access-njkcj\") pod \"keystone-db-create-d7hbm\" (UID: \"10501a22-53ef-4e70-9746-bea547a51fac\") " pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.191300 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01496eb9-e1e6-45fc-872e-63b8be1baec4-operator-scripts\") pod \"keystone-0a87-account-create-update-hv2jn\" (UID: \"01496eb9-e1e6-45fc-872e-63b8be1baec4\") " pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.191377 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5wsd\" (UniqueName: \"kubernetes.io/projected/01496eb9-e1e6-45fc-872e-63b8be1baec4-kube-api-access-q5wsd\") pod \"keystone-0a87-account-create-update-hv2jn\" (UID: \"01496eb9-e1e6-45fc-872e-63b8be1baec4\") " pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.192007 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01496eb9-e1e6-45fc-872e-63b8be1baec4-operator-scripts\") pod 
\"keystone-0a87-account-create-update-hv2jn\" (UID: \"01496eb9-e1e6-45fc-872e-63b8be1baec4\") " pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.208337 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5wsd\" (UniqueName: \"kubernetes.io/projected/01496eb9-e1e6-45fc-872e-63b8be1baec4-kube-api-access-q5wsd\") pod \"keystone-0a87-account-create-update-hv2jn\" (UID: \"01496eb9-e1e6-45fc-872e-63b8be1baec4\") " pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.224750 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.326345 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.653982 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d7hbm"] Jan 21 17:04:24 crc kubenswrapper[4890]: W0121 17:04:24.655046 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10501a22_53ef_4e70_9746_bea547a51fac.slice/crio-f8d36057f50bd94073589b78ec72ae39daed45d661ace32c8309622f54a0a9c4 WatchSource:0}: Error finding container f8d36057f50bd94073589b78ec72ae39daed45d661ace32c8309622f54a0a9c4: Status 404 returned error can't find the container with id f8d36057f50bd94073589b78ec72ae39daed45d661ace32c8309622f54a0a9c4 Jan 21 17:04:24 crc kubenswrapper[4890]: I0121 17:04:24.793907 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0a87-account-create-update-hv2jn"] Jan 21 17:04:24 crc kubenswrapper[4890]: W0121 17:04:24.796942 4890 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01496eb9_e1e6_45fc_872e_63b8be1baec4.slice/crio-da97669650b15a5090b01884c5de49b8d0acbea1585f6f464cabc7d66092ddd0 WatchSource:0}: Error finding container da97669650b15a5090b01884c5de49b8d0acbea1585f6f464cabc7d66092ddd0: Status 404 returned error can't find the container with id da97669650b15a5090b01884c5de49b8d0acbea1585f6f464cabc7d66092ddd0 Jan 21 17:04:25 crc kubenswrapper[4890]: I0121 17:04:25.234317 4890 generic.go:334] "Generic (PLEG): container finished" podID="01496eb9-e1e6-45fc-872e-63b8be1baec4" containerID="96ce91110b466c157ba748f9a21d049e2dfa62a5ebc4c1d3a6300192670d7a1e" exitCode=0 Jan 21 17:04:25 crc kubenswrapper[4890]: I0121 17:04:25.234465 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a87-account-create-update-hv2jn" event={"ID":"01496eb9-e1e6-45fc-872e-63b8be1baec4","Type":"ContainerDied","Data":"96ce91110b466c157ba748f9a21d049e2dfa62a5ebc4c1d3a6300192670d7a1e"} Jan 21 17:04:25 crc kubenswrapper[4890]: I0121 17:04:25.234784 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a87-account-create-update-hv2jn" event={"ID":"01496eb9-e1e6-45fc-872e-63b8be1baec4","Type":"ContainerStarted","Data":"da97669650b15a5090b01884c5de49b8d0acbea1585f6f464cabc7d66092ddd0"} Jan 21 17:04:25 crc kubenswrapper[4890]: I0121 17:04:25.237310 4890 generic.go:334] "Generic (PLEG): container finished" podID="10501a22-53ef-4e70-9746-bea547a51fac" containerID="06cafcee89ef66240bf9d9fbe33af663eac9c9cdbf6401edb79113bc19f2bdd1" exitCode=0 Jan 21 17:04:25 crc kubenswrapper[4890]: I0121 17:04:25.237341 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7hbm" event={"ID":"10501a22-53ef-4e70-9746-bea547a51fac","Type":"ContainerDied","Data":"06cafcee89ef66240bf9d9fbe33af663eac9c9cdbf6401edb79113bc19f2bdd1"} Jan 21 17:04:25 crc kubenswrapper[4890]: I0121 17:04:25.237374 4890 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-db-create-d7hbm" event={"ID":"10501a22-53ef-4e70-9746-bea547a51fac","Type":"ContainerStarted","Data":"f8d36057f50bd94073589b78ec72ae39daed45d661ace32c8309622f54a0a9c4"} Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.750710 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.758429 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.850826 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01496eb9-e1e6-45fc-872e-63b8be1baec4-operator-scripts\") pod \"01496eb9-e1e6-45fc-872e-63b8be1baec4\" (UID: \"01496eb9-e1e6-45fc-872e-63b8be1baec4\") " Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.850906 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njkcj\" (UniqueName: \"kubernetes.io/projected/10501a22-53ef-4e70-9746-bea547a51fac-kube-api-access-njkcj\") pod \"10501a22-53ef-4e70-9746-bea547a51fac\" (UID: \"10501a22-53ef-4e70-9746-bea547a51fac\") " Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.850929 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10501a22-53ef-4e70-9746-bea547a51fac-operator-scripts\") pod \"10501a22-53ef-4e70-9746-bea547a51fac\" (UID: \"10501a22-53ef-4e70-9746-bea547a51fac\") " Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.851044 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5wsd\" (UniqueName: \"kubernetes.io/projected/01496eb9-e1e6-45fc-872e-63b8be1baec4-kube-api-access-q5wsd\") pod \"01496eb9-e1e6-45fc-872e-63b8be1baec4\" (UID: 
\"01496eb9-e1e6-45fc-872e-63b8be1baec4\") " Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.851633 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10501a22-53ef-4e70-9746-bea547a51fac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10501a22-53ef-4e70-9746-bea547a51fac" (UID: "10501a22-53ef-4e70-9746-bea547a51fac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.851962 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01496eb9-e1e6-45fc-872e-63b8be1baec4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01496eb9-e1e6-45fc-872e-63b8be1baec4" (UID: "01496eb9-e1e6-45fc-872e-63b8be1baec4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.855456 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01496eb9-e1e6-45fc-872e-63b8be1baec4-kube-api-access-q5wsd" (OuterVolumeSpecName: "kube-api-access-q5wsd") pod "01496eb9-e1e6-45fc-872e-63b8be1baec4" (UID: "01496eb9-e1e6-45fc-872e-63b8be1baec4"). InnerVolumeSpecName "kube-api-access-q5wsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.857974 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10501a22-53ef-4e70-9746-bea547a51fac-kube-api-access-njkcj" (OuterVolumeSpecName: "kube-api-access-njkcj") pod "10501a22-53ef-4e70-9746-bea547a51fac" (UID: "10501a22-53ef-4e70-9746-bea547a51fac"). InnerVolumeSpecName "kube-api-access-njkcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.954515 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01496eb9-e1e6-45fc-872e-63b8be1baec4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.954551 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njkcj\" (UniqueName: \"kubernetes.io/projected/10501a22-53ef-4e70-9746-bea547a51fac-kube-api-access-njkcj\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.954563 4890 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10501a22-53ef-4e70-9746-bea547a51fac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:26 crc kubenswrapper[4890]: I0121 17:04:26.954574 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5wsd\" (UniqueName: \"kubernetes.io/projected/01496eb9-e1e6-45fc-872e-63b8be1baec4-kube-api-access-q5wsd\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:27 crc kubenswrapper[4890]: I0121 17:04:27.251910 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7hbm" event={"ID":"10501a22-53ef-4e70-9746-bea547a51fac","Type":"ContainerDied","Data":"f8d36057f50bd94073589b78ec72ae39daed45d661ace32c8309622f54a0a9c4"} Jan 21 17:04:27 crc kubenswrapper[4890]: I0121 17:04:27.251961 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8d36057f50bd94073589b78ec72ae39daed45d661ace32c8309622f54a0a9c4" Jan 21 17:04:27 crc kubenswrapper[4890]: I0121 17:04:27.252025 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-d7hbm" Jan 21 17:04:27 crc kubenswrapper[4890]: I0121 17:04:27.255540 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a87-account-create-update-hv2jn" event={"ID":"01496eb9-e1e6-45fc-872e-63b8be1baec4","Type":"ContainerDied","Data":"da97669650b15a5090b01884c5de49b8d0acbea1585f6f464cabc7d66092ddd0"} Jan 21 17:04:27 crc kubenswrapper[4890]: I0121 17:04:27.255575 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a87-account-create-update-hv2jn" Jan 21 17:04:27 crc kubenswrapper[4890]: I0121 17:04:27.255594 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da97669650b15a5090b01884c5de49b8d0acbea1585f6f464cabc7d66092ddd0" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.552477 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-7m788"] Jan 21 17:04:29 crc kubenswrapper[4890]: E0121 17:04:29.553133 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01496eb9-e1e6-45fc-872e-63b8be1baec4" containerName="mariadb-account-create-update" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.553147 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="01496eb9-e1e6-45fc-872e-63b8be1baec4" containerName="mariadb-account-create-update" Jan 21 17:04:29 crc kubenswrapper[4890]: E0121 17:04:29.553181 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10501a22-53ef-4e70-9746-bea547a51fac" containerName="mariadb-database-create" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.553190 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="10501a22-53ef-4e70-9746-bea547a51fac" containerName="mariadb-database-create" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.553378 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="01496eb9-e1e6-45fc-872e-63b8be1baec4" 
containerName="mariadb-account-create-update" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.553404 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="10501a22-53ef-4e70-9746-bea547a51fac" containerName="mariadb-database-create" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.554035 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.556807 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.557032 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7kxwv" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.557268 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.564730 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7m788"] Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.566808 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.595585 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-config-data\") pod \"keystone-db-sync-7m788\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.595658 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7z2s\" (UniqueName: \"kubernetes.io/projected/ac707741-3e2a-4e3b-91d9-4506d55b585f-kube-api-access-s7z2s\") pod \"keystone-db-sync-7m788\" (UID: 
\"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.595793 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-combined-ca-bundle\") pod \"keystone-db-sync-7m788\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.697321 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-combined-ca-bundle\") pod \"keystone-db-sync-7m788\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.697409 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-config-data\") pod \"keystone-db-sync-7m788\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.697444 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7z2s\" (UniqueName: \"kubernetes.io/projected/ac707741-3e2a-4e3b-91d9-4506d55b585f-kube-api-access-s7z2s\") pod \"keystone-db-sync-7m788\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.708279 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-combined-ca-bundle\") pod \"keystone-db-sync-7m788\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " pod="openstack/keystone-db-sync-7m788" 
Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.708791 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-config-data\") pod \"keystone-db-sync-7m788\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.718186 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7z2s\" (UniqueName: \"kubernetes.io/projected/ac707741-3e2a-4e3b-91d9-4506d55b585f-kube-api-access-s7z2s\") pod \"keystone-db-sync-7m788\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:29 crc kubenswrapper[4890]: I0121 17:04:29.902502 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:30 crc kubenswrapper[4890]: W0121 17:04:30.346061 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac707741_3e2a_4e3b_91d9_4506d55b585f.slice/crio-6ea76907ad952d12e8378f6e43aa3dba19e02c23ad3750154ce99e3b840c177b WatchSource:0}: Error finding container 6ea76907ad952d12e8378f6e43aa3dba19e02c23ad3750154ce99e3b840c177b: Status 404 returned error can't find the container with id 6ea76907ad952d12e8378f6e43aa3dba19e02c23ad3750154ce99e3b840c177b Jan 21 17:04:30 crc kubenswrapper[4890]: I0121 17:04:30.346250 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7m788"] Jan 21 17:04:31 crc kubenswrapper[4890]: I0121 17:04:31.308019 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7m788" event={"ID":"ac707741-3e2a-4e3b-91d9-4506d55b585f","Type":"ContainerStarted","Data":"533ec7e9078c2460b2a603e0fb9b71a684635477497df281fc5620a685e81ec1"} Jan 21 17:04:31 crc kubenswrapper[4890]: I0121 17:04:31.308458 4890 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7m788" event={"ID":"ac707741-3e2a-4e3b-91d9-4506d55b585f","Type":"ContainerStarted","Data":"6ea76907ad952d12e8378f6e43aa3dba19e02c23ad3750154ce99e3b840c177b"} Jan 21 17:04:31 crc kubenswrapper[4890]: I0121 17:04:31.340820 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-7m788" podStartSLOduration=2.340802549 podStartE2EDuration="2.340802549s" podCreationTimestamp="2026-01-21 17:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:31.329272092 +0000 UTC m=+5553.690714511" watchObservedRunningTime="2026-01-21 17:04:31.340802549 +0000 UTC m=+5553.702244958" Jan 21 17:04:32 crc kubenswrapper[4890]: I0121 17:04:32.315066 4890 generic.go:334] "Generic (PLEG): container finished" podID="ac707741-3e2a-4e3b-91d9-4506d55b585f" containerID="533ec7e9078c2460b2a603e0fb9b71a684635477497df281fc5620a685e81ec1" exitCode=0 Jan 21 17:04:32 crc kubenswrapper[4890]: I0121 17:04:32.315105 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7m788" event={"ID":"ac707741-3e2a-4e3b-91d9-4506d55b585f","Type":"ContainerDied","Data":"533ec7e9078c2460b2a603e0fb9b71a684635477497df281fc5620a685e81ec1"} Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.510039 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.688254 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.774514 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-combined-ca-bundle\") pod \"ac707741-3e2a-4e3b-91d9-4506d55b585f\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.774705 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7z2s\" (UniqueName: \"kubernetes.io/projected/ac707741-3e2a-4e3b-91d9-4506d55b585f-kube-api-access-s7z2s\") pod \"ac707741-3e2a-4e3b-91d9-4506d55b585f\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.774803 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-config-data\") pod \"ac707741-3e2a-4e3b-91d9-4506d55b585f\" (UID: \"ac707741-3e2a-4e3b-91d9-4506d55b585f\") " Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.779294 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac707741-3e2a-4e3b-91d9-4506d55b585f-kube-api-access-s7z2s" (OuterVolumeSpecName: "kube-api-access-s7z2s") pod "ac707741-3e2a-4e3b-91d9-4506d55b585f" (UID: "ac707741-3e2a-4e3b-91d9-4506d55b585f"). InnerVolumeSpecName "kube-api-access-s7z2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.799978 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac707741-3e2a-4e3b-91d9-4506d55b585f" (UID: "ac707741-3e2a-4e3b-91d9-4506d55b585f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.813267 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-config-data" (OuterVolumeSpecName: "config-data") pod "ac707741-3e2a-4e3b-91d9-4506d55b585f" (UID: "ac707741-3e2a-4e3b-91d9-4506d55b585f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.876814 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.876847 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7z2s\" (UniqueName: \"kubernetes.io/projected/ac707741-3e2a-4e3b-91d9-4506d55b585f-kube-api-access-s7z2s\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:33 crc kubenswrapper[4890]: I0121 17:04:33.876857 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac707741-3e2a-4e3b-91d9-4506d55b585f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.336048 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7m788" event={"ID":"ac707741-3e2a-4e3b-91d9-4506d55b585f","Type":"ContainerDied","Data":"6ea76907ad952d12e8378f6e43aa3dba19e02c23ad3750154ce99e3b840c177b"} Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.336087 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ea76907ad952d12e8378f6e43aa3dba19e02c23ad3750154ce99e3b840c177b" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.336105 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7m788" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.914541 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:04:34 crc kubenswrapper[4890]: E0121 17:04:34.915211 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.940547 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69f667c44c-9pmw8"] Jan 21 17:04:34 crc kubenswrapper[4890]: E0121 17:04:34.940872 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac707741-3e2a-4e3b-91d9-4506d55b585f" containerName="keystone-db-sync" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.940891 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac707741-3e2a-4e3b-91d9-4506d55b585f" containerName="keystone-db-sync" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.941097 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac707741-3e2a-4e3b-91d9-4506d55b585f" containerName="keystone-db-sync" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.941910 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.962464 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f667c44c-9pmw8"] Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.989490 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4mkq4"] Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.990460 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.994529 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-dns-svc\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.994682 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-config\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.994729 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7kxwv" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.994752 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-ovsdbserver-nb\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 
17:04:34.994770 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.994793 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whhzn\" (UniqueName: \"kubernetes.io/projected/97c71702-fe9c-4370-8dd9-2b9a619e939b-kube-api-access-whhzn\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.994861 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-ovsdbserver-sb\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.994729 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.995156 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 17:04:34 crc kubenswrapper[4890]: I0121 17:04:34.995498 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.029517 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4mkq4"] Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.096600 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whhzn\" (UniqueName: \"kubernetes.io/projected/97c71702-fe9c-4370-8dd9-2b9a619e939b-kube-api-access-whhzn\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc 
kubenswrapper[4890]: I0121 17:04:35.096688 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-combined-ca-bundle\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.096721 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-config-data\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.096783 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-ovsdbserver-sb\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.096822 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwc7f\" (UniqueName: \"kubernetes.io/projected/e430a1b7-6ca9-4723-a766-797a6bcbd243-kube-api-access-wwc7f\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.096860 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-dns-svc\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: 
I0121 17:04:35.096897 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-fernet-keys\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.096943 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-credential-keys\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.096980 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-config\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.097001 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-scripts\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.097060 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-ovsdbserver-nb\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.097629 4890 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-ovsdbserver-sb\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.098941 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-dns-svc\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.099131 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-config\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.099131 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c71702-fe9c-4370-8dd9-2b9a619e939b-ovsdbserver-nb\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.121206 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whhzn\" (UniqueName: \"kubernetes.io/projected/97c71702-fe9c-4370-8dd9-2b9a619e939b-kube-api-access-whhzn\") pod \"dnsmasq-dns-69f667c44c-9pmw8\" (UID: \"97c71702-fe9c-4370-8dd9-2b9a619e939b\") " pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.198444 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-fernet-keys\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.198514 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-credential-keys\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.198542 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-scripts\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.198612 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-combined-ca-bundle\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.198634 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-config-data\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.198675 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwc7f\" (UniqueName: \"kubernetes.io/projected/e430a1b7-6ca9-4723-a766-797a6bcbd243-kube-api-access-wwc7f\") pod 
\"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.202763 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-scripts\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.202780 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-combined-ca-bundle\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.203762 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-fernet-keys\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.204430 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-config-data\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.211302 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-credential-keys\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: 
I0121 17:04:35.215647 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwc7f\" (UniqueName: \"kubernetes.io/projected/e430a1b7-6ca9-4723-a766-797a6bcbd243-kube-api-access-wwc7f\") pod \"keystone-bootstrap-4mkq4\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.266772 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.317024 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.721035 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f667c44c-9pmw8"] Jan 21 17:04:35 crc kubenswrapper[4890]: W0121 17:04:35.725255 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97c71702_fe9c_4370_8dd9_2b9a619e939b.slice/crio-c30364cd1abb4bce86801289f0f8793bf7e79298aa9724213b5a71d3fd0d82dc WatchSource:0}: Error finding container c30364cd1abb4bce86801289f0f8793bf7e79298aa9724213b5a71d3fd0d82dc: Status 404 returned error can't find the container with id c30364cd1abb4bce86801289f0f8793bf7e79298aa9724213b5a71d3fd0d82dc Jan 21 17:04:35 crc kubenswrapper[4890]: I0121 17:04:35.849029 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4mkq4"] Jan 21 17:04:35 crc kubenswrapper[4890]: W0121 17:04:35.851429 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode430a1b7_6ca9_4723_a766_797a6bcbd243.slice/crio-371dfef8e4e8476afcb2c55ac61af3b47031af251af21a104e58dd9ba9d20357 WatchSource:0}: Error finding container 371dfef8e4e8476afcb2c55ac61af3b47031af251af21a104e58dd9ba9d20357: Status 
404 returned error can't find the container with id 371dfef8e4e8476afcb2c55ac61af3b47031af251af21a104e58dd9ba9d20357 Jan 21 17:04:36 crc kubenswrapper[4890]: I0121 17:04:36.360813 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4mkq4" event={"ID":"e430a1b7-6ca9-4723-a766-797a6bcbd243","Type":"ContainerStarted","Data":"4d3cfa570544df055cdfe5eb32ce1dfcaeda29dc4502eaa778137897415171db"} Jan 21 17:04:36 crc kubenswrapper[4890]: I0121 17:04:36.361187 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4mkq4" event={"ID":"e430a1b7-6ca9-4723-a766-797a6bcbd243","Type":"ContainerStarted","Data":"371dfef8e4e8476afcb2c55ac61af3b47031af251af21a104e58dd9ba9d20357"} Jan 21 17:04:36 crc kubenswrapper[4890]: I0121 17:04:36.362858 4890 generic.go:334] "Generic (PLEG): container finished" podID="97c71702-fe9c-4370-8dd9-2b9a619e939b" containerID="13a855302e9d71de92c9a568660596525cd20c33c2b93eafb97352e442480bf0" exitCode=0 Jan 21 17:04:36 crc kubenswrapper[4890]: I0121 17:04:36.362909 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" event={"ID":"97c71702-fe9c-4370-8dd9-2b9a619e939b","Type":"ContainerDied","Data":"13a855302e9d71de92c9a568660596525cd20c33c2b93eafb97352e442480bf0"} Jan 21 17:04:36 crc kubenswrapper[4890]: I0121 17:04:36.362944 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" event={"ID":"97c71702-fe9c-4370-8dd9-2b9a619e939b","Type":"ContainerStarted","Data":"c30364cd1abb4bce86801289f0f8793bf7e79298aa9724213b5a71d3fd0d82dc"} Jan 21 17:04:36 crc kubenswrapper[4890]: I0121 17:04:36.398108 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4mkq4" podStartSLOduration=2.398083283 podStartE2EDuration="2.398083283s" podCreationTimestamp="2026-01-21 17:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:36.393102929 +0000 UTC m=+5558.754545348" watchObservedRunningTime="2026-01-21 17:04:36.398083283 +0000 UTC m=+5558.759525702" Jan 21 17:04:37 crc kubenswrapper[4890]: I0121 17:04:37.374217 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" event={"ID":"97c71702-fe9c-4370-8dd9-2b9a619e939b","Type":"ContainerStarted","Data":"bb7f1a3528156a9bb90a06bfe56378d64093f335e1ff6d96f9f88e1aedb09c31"} Jan 21 17:04:37 crc kubenswrapper[4890]: I0121 17:04:37.402648 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" podStartSLOduration=3.402627128 podStartE2EDuration="3.402627128s" podCreationTimestamp="2026-01-21 17:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:37.394573637 +0000 UTC m=+5559.756016056" watchObservedRunningTime="2026-01-21 17:04:37.402627128 +0000 UTC m=+5559.764069547" Jan 21 17:04:38 crc kubenswrapper[4890]: I0121 17:04:38.382225 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:40 crc kubenswrapper[4890]: I0121 17:04:40.398405 4890 generic.go:334] "Generic (PLEG): container finished" podID="e430a1b7-6ca9-4723-a766-797a6bcbd243" containerID="4d3cfa570544df055cdfe5eb32ce1dfcaeda29dc4502eaa778137897415171db" exitCode=0 Jan 21 17:04:40 crc kubenswrapper[4890]: I0121 17:04:40.398460 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4mkq4" event={"ID":"e430a1b7-6ca9-4723-a766-797a6bcbd243","Type":"ContainerDied","Data":"4d3cfa570544df055cdfe5eb32ce1dfcaeda29dc4502eaa778137897415171db"} Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.777052 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.913819 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwc7f\" (UniqueName: \"kubernetes.io/projected/e430a1b7-6ca9-4723-a766-797a6bcbd243-kube-api-access-wwc7f\") pod \"e430a1b7-6ca9-4723-a766-797a6bcbd243\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.914436 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-config-data\") pod \"e430a1b7-6ca9-4723-a766-797a6bcbd243\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.914539 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-fernet-keys\") pod \"e430a1b7-6ca9-4723-a766-797a6bcbd243\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.915304 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-credential-keys\") pod \"e430a1b7-6ca9-4723-a766-797a6bcbd243\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.915376 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-scripts\") pod \"e430a1b7-6ca9-4723-a766-797a6bcbd243\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.915478 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-combined-ca-bundle\") pod \"e430a1b7-6ca9-4723-a766-797a6bcbd243\" (UID: \"e430a1b7-6ca9-4723-a766-797a6bcbd243\") " Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.921902 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e430a1b7-6ca9-4723-a766-797a6bcbd243" (UID: "e430a1b7-6ca9-4723-a766-797a6bcbd243"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.921917 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-scripts" (OuterVolumeSpecName: "scripts") pod "e430a1b7-6ca9-4723-a766-797a6bcbd243" (UID: "e430a1b7-6ca9-4723-a766-797a6bcbd243"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.922276 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e430a1b7-6ca9-4723-a766-797a6bcbd243" (UID: "e430a1b7-6ca9-4723-a766-797a6bcbd243"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.922831 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e430a1b7-6ca9-4723-a766-797a6bcbd243-kube-api-access-wwc7f" (OuterVolumeSpecName: "kube-api-access-wwc7f") pod "e430a1b7-6ca9-4723-a766-797a6bcbd243" (UID: "e430a1b7-6ca9-4723-a766-797a6bcbd243"). InnerVolumeSpecName "kube-api-access-wwc7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.940140 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e430a1b7-6ca9-4723-a766-797a6bcbd243" (UID: "e430a1b7-6ca9-4723-a766-797a6bcbd243"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:41 crc kubenswrapper[4890]: I0121 17:04:41.951064 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-config-data" (OuterVolumeSpecName: "config-data") pod "e430a1b7-6ca9-4723-a766-797a6bcbd243" (UID: "e430a1b7-6ca9-4723-a766-797a6bcbd243"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.017659 4890 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.017711 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.017724 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.017784 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwc7f\" (UniqueName: \"kubernetes.io/projected/e430a1b7-6ca9-4723-a766-797a6bcbd243-kube-api-access-wwc7f\") on node \"crc\" DevicePath 
\"\"" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.017797 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.017807 4890 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e430a1b7-6ca9-4723-a766-797a6bcbd243-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.419881 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4mkq4" event={"ID":"e430a1b7-6ca9-4723-a766-797a6bcbd243","Type":"ContainerDied","Data":"371dfef8e4e8476afcb2c55ac61af3b47031af251af21a104e58dd9ba9d20357"} Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.419950 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4mkq4" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.419952 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="371dfef8e4e8476afcb2c55ac61af3b47031af251af21a104e58dd9ba9d20357" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.515433 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4mkq4"] Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.523517 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4mkq4"] Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.592108 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rwhlx"] Jan 21 17:04:42 crc kubenswrapper[4890]: E0121 17:04:42.592532 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e430a1b7-6ca9-4723-a766-797a6bcbd243" containerName="keystone-bootstrap" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.592586 4890 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e430a1b7-6ca9-4723-a766-797a6bcbd243" containerName="keystone-bootstrap" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.592937 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="e430a1b7-6ca9-4723-a766-797a6bcbd243" containerName="keystone-bootstrap" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.594004 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.599426 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.599485 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.599426 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7kxwv" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.599655 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.599749 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.617791 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rwhlx"] Jan 21 17:04:42 crc kubenswrapper[4890]: E0121 17:04:42.635199 4890 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode430a1b7_6ca9_4723_a766_797a6bcbd243.slice\": RecentStats: unable to find data in memory cache]" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.729382 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-fernet-keys\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.729453 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-credential-keys\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.729504 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-scripts\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.729527 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjz8\" (UniqueName: \"kubernetes.io/projected/a437a071-861b-41f4-b78f-b2b5775d464f-kube-api-access-hvjz8\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.729625 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-config-data\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.729660 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-combined-ca-bundle\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.832121 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-scripts\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.832209 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjz8\" (UniqueName: \"kubernetes.io/projected/a437a071-861b-41f4-b78f-b2b5775d464f-kube-api-access-hvjz8\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.832373 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-config-data\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.832424 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-combined-ca-bundle\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.832607 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-fernet-keys\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.832671 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-credential-keys\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.835913 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-scripts\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.836378 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-config-data\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.837992 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-fernet-keys\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.839222 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-credential-keys\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " 
pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.841567 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-combined-ca-bundle\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.848111 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvjz8\" (UniqueName: \"kubernetes.io/projected/a437a071-861b-41f4-b78f-b2b5775d464f-kube-api-access-hvjz8\") pod \"keystone-bootstrap-rwhlx\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:42 crc kubenswrapper[4890]: I0121 17:04:42.918819 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:43 crc kubenswrapper[4890]: I0121 17:04:43.326968 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rwhlx"] Jan 21 17:04:43 crc kubenswrapper[4890]: I0121 17:04:43.427542 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rwhlx" event={"ID":"a437a071-861b-41f4-b78f-b2b5775d464f","Type":"ContainerStarted","Data":"a01b0545197f76eb09af654ad81b0ea2270b2c565eddb4b2945c19853f21b22a"} Jan 21 17:04:43 crc kubenswrapper[4890]: I0121 17:04:43.924060 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e430a1b7-6ca9-4723-a766-797a6bcbd243" path="/var/lib/kubelet/pods/e430a1b7-6ca9-4723-a766-797a6bcbd243/volumes" Jan 21 17:04:44 crc kubenswrapper[4890]: I0121 17:04:44.441334 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rwhlx" 
event={"ID":"a437a071-861b-41f4-b78f-b2b5775d464f","Type":"ContainerStarted","Data":"8c3c7cf59cd6d9ac36223ec798e5cc1e0da4af21ed1296b165ded5455aa55e36"} Jan 21 17:04:44 crc kubenswrapper[4890]: I0121 17:04:44.458807 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rwhlx" podStartSLOduration=2.4587844260000002 podStartE2EDuration="2.458784426s" podCreationTimestamp="2026-01-21 17:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:44.458659033 +0000 UTC m=+5566.820101452" watchObservedRunningTime="2026-01-21 17:04:44.458784426 +0000 UTC m=+5566.820226835" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.269600 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69f667c44c-9pmw8" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.338442 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-p999g"] Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.338732 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69db5595f9-p999g" podUID="abaf98b1-05e5-4669-904c-42de87a966f5" containerName="dnsmasq-dns" containerID="cri-o://92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227" gracePeriod=10 Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.814195 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.891247 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-sb\") pod \"abaf98b1-05e5-4669-904c-42de87a966f5\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.891416 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-config\") pod \"abaf98b1-05e5-4669-904c-42de87a966f5\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.891491 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-nb\") pod \"abaf98b1-05e5-4669-904c-42de87a966f5\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.891540 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lm7h\" (UniqueName: \"kubernetes.io/projected/abaf98b1-05e5-4669-904c-42de87a966f5-kube-api-access-9lm7h\") pod \"abaf98b1-05e5-4669-904c-42de87a966f5\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.891709 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-dns-svc\") pod \"abaf98b1-05e5-4669-904c-42de87a966f5\" (UID: \"abaf98b1-05e5-4669-904c-42de87a966f5\") " Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.897499 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/abaf98b1-05e5-4669-904c-42de87a966f5-kube-api-access-9lm7h" (OuterVolumeSpecName: "kube-api-access-9lm7h") pod "abaf98b1-05e5-4669-904c-42de87a966f5" (UID: "abaf98b1-05e5-4669-904c-42de87a966f5"). InnerVolumeSpecName "kube-api-access-9lm7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.914652 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:04:45 crc kubenswrapper[4890]: E0121 17:04:45.914916 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.938515 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abaf98b1-05e5-4669-904c-42de87a966f5" (UID: "abaf98b1-05e5-4669-904c-42de87a966f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.941719 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "abaf98b1-05e5-4669-904c-42de87a966f5" (UID: "abaf98b1-05e5-4669-904c-42de87a966f5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.944627 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-config" (OuterVolumeSpecName: "config") pod "abaf98b1-05e5-4669-904c-42de87a966f5" (UID: "abaf98b1-05e5-4669-904c-42de87a966f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.947265 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "abaf98b1-05e5-4669-904c-42de87a966f5" (UID: "abaf98b1-05e5-4669-904c-42de87a966f5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.994163 4890 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.994197 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.994209 4890 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:45 crc kubenswrapper[4890]: I0121 17:04:45.994218 4890 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaf98b1-05e5-4669-904c-42de87a966f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:45 crc kubenswrapper[4890]: 
I0121 17:04:45.994234 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lm7h\" (UniqueName: \"kubernetes.io/projected/abaf98b1-05e5-4669-904c-42de87a966f5-kube-api-access-9lm7h\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.459295 4890 generic.go:334] "Generic (PLEG): container finished" podID="abaf98b1-05e5-4669-904c-42de87a966f5" containerID="92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227" exitCode=0 Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.459363 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69db5595f9-p999g" Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.459383 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-p999g" event={"ID":"abaf98b1-05e5-4669-904c-42de87a966f5","Type":"ContainerDied","Data":"92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227"} Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.459429 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-p999g" event={"ID":"abaf98b1-05e5-4669-904c-42de87a966f5","Type":"ContainerDied","Data":"f53b9b61c60c10bb6edb4e6f25f21ed3a530ba8cc9c264f7e594b98fac1f3681"} Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.459448 4890 scope.go:117] "RemoveContainer" containerID="92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227" Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.461119 4890 generic.go:334] "Generic (PLEG): container finished" podID="a437a071-861b-41f4-b78f-b2b5775d464f" containerID="8c3c7cf59cd6d9ac36223ec798e5cc1e0da4af21ed1296b165ded5455aa55e36" exitCode=0 Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.461141 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rwhlx" 
event={"ID":"a437a071-861b-41f4-b78f-b2b5775d464f","Type":"ContainerDied","Data":"8c3c7cf59cd6d9ac36223ec798e5cc1e0da4af21ed1296b165ded5455aa55e36"} Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.512661 4890 scope.go:117] "RemoveContainer" containerID="df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47" Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.515718 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-p999g"] Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.523825 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-p999g"] Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.534177 4890 scope.go:117] "RemoveContainer" containerID="92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227" Jan 21 17:04:46 crc kubenswrapper[4890]: E0121 17:04:46.534588 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227\": container with ID starting with 92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227 not found: ID does not exist" containerID="92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227" Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.534639 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227"} err="failed to get container status \"92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227\": rpc error: code = NotFound desc = could not find container \"92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227\": container with ID starting with 92fe94102adc5d459c4f82625d461e88f57af1866c4da04818611c3cc8faf227 not found: ID does not exist" Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.534667 4890 scope.go:117] "RemoveContainer" 
containerID="df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47" Jan 21 17:04:46 crc kubenswrapper[4890]: E0121 17:04:46.535524 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47\": container with ID starting with df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47 not found: ID does not exist" containerID="df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47" Jan 21 17:04:46 crc kubenswrapper[4890]: I0121 17:04:46.535550 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47"} err="failed to get container status \"df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47\": rpc error: code = NotFound desc = could not find container \"df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47\": container with ID starting with df6ee9bbd9b9445d2ffd47141705c2ce67366f83eff4cb04e35dd3b879960b47 not found: ID does not exist" Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.826697 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.920655 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-credential-keys\") pod \"a437a071-861b-41f4-b78f-b2b5775d464f\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.920714 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-fernet-keys\") pod \"a437a071-861b-41f4-b78f-b2b5775d464f\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.920743 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-config-data\") pod \"a437a071-861b-41f4-b78f-b2b5775d464f\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.920773 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-combined-ca-bundle\") pod \"a437a071-861b-41f4-b78f-b2b5775d464f\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.920833 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-scripts\") pod \"a437a071-861b-41f4-b78f-b2b5775d464f\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.920957 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvjz8\" (UniqueName: 
\"kubernetes.io/projected/a437a071-861b-41f4-b78f-b2b5775d464f-kube-api-access-hvjz8\") pod \"a437a071-861b-41f4-b78f-b2b5775d464f\" (UID: \"a437a071-861b-41f4-b78f-b2b5775d464f\") " Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.923993 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abaf98b1-05e5-4669-904c-42de87a966f5" path="/var/lib/kubelet/pods/abaf98b1-05e5-4669-904c-42de87a966f5/volumes" Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.932510 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a437a071-861b-41f4-b78f-b2b5775d464f-kube-api-access-hvjz8" (OuterVolumeSpecName: "kube-api-access-hvjz8") pod "a437a071-861b-41f4-b78f-b2b5775d464f" (UID: "a437a071-861b-41f4-b78f-b2b5775d464f"). InnerVolumeSpecName "kube-api-access-hvjz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.933196 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a437a071-861b-41f4-b78f-b2b5775d464f" (UID: "a437a071-861b-41f4-b78f-b2b5775d464f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.934700 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-scripts" (OuterVolumeSpecName: "scripts") pod "a437a071-861b-41f4-b78f-b2b5775d464f" (UID: "a437a071-861b-41f4-b78f-b2b5775d464f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.939439 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a437a071-861b-41f4-b78f-b2b5775d464f" (UID: "a437a071-861b-41f4-b78f-b2b5775d464f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.942469 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-config-data" (OuterVolumeSpecName: "config-data") pod "a437a071-861b-41f4-b78f-b2b5775d464f" (UID: "a437a071-861b-41f4-b78f-b2b5775d464f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:47 crc kubenswrapper[4890]: I0121 17:04:47.957839 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a437a071-861b-41f4-b78f-b2b5775d464f" (UID: "a437a071-861b-41f4-b78f-b2b5775d464f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.022425 4890 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.022458 4890 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.022467 4890 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.022571 4890 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.022899 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvjz8\" (UniqueName: \"kubernetes.io/projected/a437a071-861b-41f4-b78f-b2b5775d464f-kube-api-access-hvjz8\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.022994 4890 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a437a071-861b-41f4-b78f-b2b5775d464f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.485051 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rwhlx" event={"ID":"a437a071-861b-41f4-b78f-b2b5775d464f","Type":"ContainerDied","Data":"a01b0545197f76eb09af654ad81b0ea2270b2c565eddb4b2945c19853f21b22a"} Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 
17:04:48.485102 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a01b0545197f76eb09af654ad81b0ea2270b2c565eddb4b2945c19853f21b22a" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.485167 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rwhlx" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.573386 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f9d658c87-4d7p9"] Jan 21 17:04:48 crc kubenswrapper[4890]: E0121 17:04:48.573761 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abaf98b1-05e5-4669-904c-42de87a966f5" containerName="init" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.573782 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="abaf98b1-05e5-4669-904c-42de87a966f5" containerName="init" Jan 21 17:04:48 crc kubenswrapper[4890]: E0121 17:04:48.573805 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a437a071-861b-41f4-b78f-b2b5775d464f" containerName="keystone-bootstrap" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.573813 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="a437a071-861b-41f4-b78f-b2b5775d464f" containerName="keystone-bootstrap" Jan 21 17:04:48 crc kubenswrapper[4890]: E0121 17:04:48.573826 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abaf98b1-05e5-4669-904c-42de87a966f5" containerName="dnsmasq-dns" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.573832 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="abaf98b1-05e5-4669-904c-42de87a966f5" containerName="dnsmasq-dns" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.573982 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="abaf98b1-05e5-4669-904c-42de87a966f5" containerName="dnsmasq-dns" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.573999 4890 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a437a071-861b-41f4-b78f-b2b5775d464f" containerName="keystone-bootstrap" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.574535 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.577913 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.578448 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.578878 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7kxwv" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.578934 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.579464 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.579833 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.582385 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-domains" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.584557 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f9d658c87-4d7p9"] Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.632739 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-fernet-keys\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 
crc kubenswrapper[4890]: I0121 17:04:48.632990 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"keystone-domains\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-keystone-domains\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.633148 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp8hf\" (UniqueName: \"kubernetes.io/projected/0d17306c-479e-41c5-89d8-0ea8d614e4b9-kube-api-access-cp8hf\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.633265 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-config-data\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.633406 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-internal-tls-certs\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.633531 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-credential-keys\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " 
pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.633667 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-combined-ca-bundle\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.633813 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-scripts\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.633977 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-public-tls-certs\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.735761 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-public-tls-certs\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.735831 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-fernet-keys\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 
21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.735850 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"keystone-domains\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-keystone-domains\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.735872 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp8hf\" (UniqueName: \"kubernetes.io/projected/0d17306c-479e-41c5-89d8-0ea8d614e4b9-kube-api-access-cp8hf\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.735899 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-config-data\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.735935 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-internal-tls-certs\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.735966 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-credential-keys\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.735986 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-combined-ca-bundle\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.736010 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-scripts\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.741078 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-internal-tls-certs\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.743742 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-fernet-keys\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.751754 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-public-tls-certs\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.752627 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-credential-keys\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.752743 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"keystone-domains\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-keystone-domains\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.755165 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp8hf\" (UniqueName: \"kubernetes.io/projected/0d17306c-479e-41c5-89d8-0ea8d614e4b9-kube-api-access-cp8hf\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.758669 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-combined-ca-bundle\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.759321 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-config-data\") pod \"keystone-5f9d658c87-4d7p9\" (UID: \"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.765671 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d17306c-479e-41c5-89d8-0ea8d614e4b9-scripts\") pod \"keystone-5f9d658c87-4d7p9\" (UID: 
\"0d17306c-479e-41c5-89d8-0ea8d614e4b9\") " pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:48 crc kubenswrapper[4890]: I0121 17:04:48.889908 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:49 crc kubenswrapper[4890]: I0121 17:04:49.300146 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f9d658c87-4d7p9"] Jan 21 17:04:49 crc kubenswrapper[4890]: I0121 17:04:49.493511 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f9d658c87-4d7p9" event={"ID":"0d17306c-479e-41c5-89d8-0ea8d614e4b9","Type":"ContainerStarted","Data":"a5e18667f6d003a31d0f7a063e478f92de26a89523e56a11f94ef19bffb0882e"} Jan 21 17:04:50 crc kubenswrapper[4890]: I0121 17:04:50.504237 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f9d658c87-4d7p9" event={"ID":"0d17306c-479e-41c5-89d8-0ea8d614e4b9","Type":"ContainerStarted","Data":"eace92297493c084ff6d2b87588f97d43394e47fe38113a998913417a3892b23"} Jan 21 17:04:50 crc kubenswrapper[4890]: I0121 17:04:50.505728 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:04:50 crc kubenswrapper[4890]: I0121 17:04:50.534447 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f9d658c87-4d7p9" podStartSLOduration=2.534424988 podStartE2EDuration="2.534424988s" podCreationTimestamp="2026-01-21 17:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:04:50.531029533 +0000 UTC m=+5572.892471942" watchObservedRunningTime="2026-01-21 17:04:50.534424988 +0000 UTC m=+5572.895867397" Jan 21 17:04:58 crc kubenswrapper[4890]: I0121 17:04:58.914671 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:04:59 crc 
kubenswrapper[4890]: I0121 17:04:59.596172 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"441e2804322bd206b49ddc5d039d873df556718ce8ad59e56fedf064eaf06c01"} Jan 21 17:05:20 crc kubenswrapper[4890]: I0121 17:05:20.428711 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f9d658c87-4d7p9" Jan 21 17:05:23 crc kubenswrapper[4890]: I0121 17:05:23.938328 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 17:05:23 crc kubenswrapper[4890]: I0121 17:05:23.940161 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 17:05:23 crc kubenswrapper[4890]: I0121 17:05:23.942374 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 17:05:23 crc kubenswrapper[4890]: I0121 17:05:23.944200 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4544z" Jan 21 17:05:23 crc kubenswrapper[4890]: I0121 17:05:23.948963 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 17:05:23 crc kubenswrapper[4890]: I0121 17:05:23.951700 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.037248 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.037302 4890 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.037353 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-openstack-config\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.037553 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncpwr\" (UniqueName: \"kubernetes.io/projected/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-kube-api-access-ncpwr\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.139405 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncpwr\" (UniqueName: \"kubernetes.io/projected/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-kube-api-access-ncpwr\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.139466 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.139499 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.139528 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-openstack-config\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.140335 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-openstack-config\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.145929 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.150905 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.157019 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncpwr\" (UniqueName: \"kubernetes.io/projected/a857cb13-0413-4d0a-96e6-40ff7d6ab4a3-kube-api-access-ncpwr\") pod \"openstackclient\" (UID: 
\"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3\") " pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.260794 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.673644 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 17:05:24 crc kubenswrapper[4890]: W0121 17:05:24.676478 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda857cb13_0413_4d0a_96e6_40ff7d6ab4a3.slice/crio-804e73229de55eb4f1c43be55d1c67fef3fa3d0bb3028fe8bc370afe58fd08a2 WatchSource:0}: Error finding container 804e73229de55eb4f1c43be55d1c67fef3fa3d0bb3028fe8bc370afe58fd08a2: Status 404 returned error can't find the container with id 804e73229de55eb4f1c43be55d1c67fef3fa3d0bb3028fe8bc370afe58fd08a2 Jan 21 17:05:24 crc kubenswrapper[4890]: I0121 17:05:24.785177 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3","Type":"ContainerStarted","Data":"804e73229de55eb4f1c43be55d1c67fef3fa3d0bb3028fe8bc370afe58fd08a2"} Jan 21 17:05:25 crc kubenswrapper[4890]: I0121 17:05:25.795038 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a857cb13-0413-4d0a-96e6-40ff7d6ab4a3","Type":"ContainerStarted","Data":"bfc21033b7c573ab32e86be24eef88c19aa04ffe2ee32479e15a24eeb7877391"} Jan 21 17:05:25 crc kubenswrapper[4890]: I0121 17:05:25.809462 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.809443603 podStartE2EDuration="2.809443603s" podCreationTimestamp="2026-01-21 17:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:05:25.808794117 +0000 UTC 
m=+5608.170236526" watchObservedRunningTime="2026-01-21 17:05:25.809443603 +0000 UTC m=+5608.170886012" Jan 21 17:06:41 crc kubenswrapper[4890]: E0121 17:06:41.210369 4890 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.2:38590->38.129.56.2:38991: read tcp 38.129.56.2:38590->38.129.56.2:38991: read: connection reset by peer Jan 21 17:07:18 crc kubenswrapper[4890]: I0121 17:07:18.792829 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:07:18 crc kubenswrapper[4890]: I0121 17:07:18.793536 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:07:48 crc kubenswrapper[4890]: I0121 17:07:48.762208 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:07:48 crc kubenswrapper[4890]: I0121 17:07:48.763187 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:07:52 crc kubenswrapper[4890]: I0121 17:07:52.056014 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-ctxvm"] Jan 21 17:07:52 crc kubenswrapper[4890]: I0121 17:07:52.067268 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ctxvm"] Jan 21 17:07:53 crc kubenswrapper[4890]: I0121 17:07:53.927464 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8bafda-f7fb-4ec8-890b-6558c7685156" path="/var/lib/kubelet/pods/4b8bafda-f7fb-4ec8-890b-6558c7685156/volumes" Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.579414 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qcqzq/must-gather-grqs4"] Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.581318 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.585969 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qcqzq"/"openshift-service-ca.crt" Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.586605 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qcqzq"/"kube-root-ca.crt" Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.586605 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qcqzq"/"default-dockercfg-vp5xg" Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.595210 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qcqzq/must-gather-grqs4"] Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.712972 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snbnx\" (UniqueName: \"kubernetes.io/projected/98162317-c717-433e-b658-258cfb11a204-kube-api-access-snbnx\") pod \"must-gather-grqs4\" (UID: \"98162317-c717-433e-b658-258cfb11a204\") " pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:08:09 crc 
kubenswrapper[4890]: I0121 17:08:09.713092 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98162317-c717-433e-b658-258cfb11a204-must-gather-output\") pod \"must-gather-grqs4\" (UID: \"98162317-c717-433e-b658-258cfb11a204\") " pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.815031 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snbnx\" (UniqueName: \"kubernetes.io/projected/98162317-c717-433e-b658-258cfb11a204-kube-api-access-snbnx\") pod \"must-gather-grqs4\" (UID: \"98162317-c717-433e-b658-258cfb11a204\") " pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.815884 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98162317-c717-433e-b658-258cfb11a204-must-gather-output\") pod \"must-gather-grqs4\" (UID: \"98162317-c717-433e-b658-258cfb11a204\") " pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.816501 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98162317-c717-433e-b658-258cfb11a204-must-gather-output\") pod \"must-gather-grqs4\" (UID: \"98162317-c717-433e-b658-258cfb11a204\") " pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:08:09 crc kubenswrapper[4890]: I0121 17:08:09.849942 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snbnx\" (UniqueName: \"kubernetes.io/projected/98162317-c717-433e-b658-258cfb11a204-kube-api-access-snbnx\") pod \"must-gather-grqs4\" (UID: \"98162317-c717-433e-b658-258cfb11a204\") " pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:08:09 crc 
kubenswrapper[4890]: I0121 17:08:09.898631 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:08:10 crc kubenswrapper[4890]: I0121 17:08:10.394636 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qcqzq/must-gather-grqs4"] Jan 21 17:08:11 crc kubenswrapper[4890]: I0121 17:08:11.045150 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcqzq/must-gather-grqs4" event={"ID":"98162317-c717-433e-b658-258cfb11a204","Type":"ContainerStarted","Data":"1c66267aa28e0a9ff626ea6af0e4d5c469426f4e8b32fdd833044f5f2f5bfa63"} Jan 21 17:08:18 crc kubenswrapper[4890]: I0121 17:08:18.761787 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:08:18 crc kubenswrapper[4890]: I0121 17:08:18.762376 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:08:18 crc kubenswrapper[4890]: I0121 17:08:18.762430 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 17:08:18 crc kubenswrapper[4890]: I0121 17:08:18.763060 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"441e2804322bd206b49ddc5d039d873df556718ce8ad59e56fedf064eaf06c01"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:08:18 crc kubenswrapper[4890]: I0121 17:08:18.763115 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://441e2804322bd206b49ddc5d039d873df556718ce8ad59e56fedf064eaf06c01" gracePeriod=600 Jan 21 17:08:18 crc kubenswrapper[4890]: I0121 17:08:18.952638 4890 scope.go:117] "RemoveContainer" containerID="f738be80314898b6478801a14f07ed4ba738804a92d428847390f33970597cb5" Jan 21 17:08:19 crc kubenswrapper[4890]: I0121 17:08:19.133983 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="441e2804322bd206b49ddc5d039d873df556718ce8ad59e56fedf064eaf06c01" exitCode=0 Jan 21 17:08:19 crc kubenswrapper[4890]: I0121 17:08:19.134046 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"441e2804322bd206b49ddc5d039d873df556718ce8ad59e56fedf064eaf06c01"} Jan 21 17:08:19 crc kubenswrapper[4890]: I0121 17:08:19.134330 4890 scope.go:117] "RemoveContainer" containerID="5be20665a40586fc5581ea8c7a4c6c340064d7e9a9c66381fa7d35f6aa4d5443" Jan 21 17:08:20 crc kubenswrapper[4890]: I0121 17:08:20.144479 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537"} Jan 21 17:08:20 crc kubenswrapper[4890]: I0121 17:08:20.146122 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcqzq/must-gather-grqs4" 
event={"ID":"98162317-c717-433e-b658-258cfb11a204","Type":"ContainerStarted","Data":"b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654"} Jan 21 17:08:20 crc kubenswrapper[4890]: I0121 17:08:20.146183 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcqzq/must-gather-grqs4" event={"ID":"98162317-c717-433e-b658-258cfb11a204","Type":"ContainerStarted","Data":"b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e"} Jan 21 17:08:20 crc kubenswrapper[4890]: I0121 17:08:20.201072 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qcqzq/must-gather-grqs4" podStartSLOduration=2.62384451 podStartE2EDuration="11.201050409s" podCreationTimestamp="2026-01-21 17:08:09 +0000 UTC" firstStartedPulling="2026-01-21 17:08:10.41133075 +0000 UTC m=+5772.772773159" lastFinishedPulling="2026-01-21 17:08:18.988536649 +0000 UTC m=+5781.349979058" observedRunningTime="2026-01-21 17:08:20.195760498 +0000 UTC m=+5782.557202907" watchObservedRunningTime="2026-01-21 17:08:20.201050409 +0000 UTC m=+5782.562492818" Jan 21 17:08:22 crc kubenswrapper[4890]: I0121 17:08:22.169230 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qcqzq/crc-debug-t54kf"] Jan 21 17:08:22 crc kubenswrapper[4890]: I0121 17:08:22.170637 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:08:22 crc kubenswrapper[4890]: I0121 17:08:22.266662 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-host\") pod \"crc-debug-t54kf\" (UID: \"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\") " pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:08:22 crc kubenswrapper[4890]: I0121 17:08:22.266754 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktx8m\" (UniqueName: \"kubernetes.io/projected/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-kube-api-access-ktx8m\") pod \"crc-debug-t54kf\" (UID: \"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\") " pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:08:22 crc kubenswrapper[4890]: I0121 17:08:22.368710 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-host\") pod \"crc-debug-t54kf\" (UID: \"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\") " pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:08:22 crc kubenswrapper[4890]: I0121 17:08:22.368824 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktx8m\" (UniqueName: \"kubernetes.io/projected/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-kube-api-access-ktx8m\") pod \"crc-debug-t54kf\" (UID: \"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\") " pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:08:22 crc kubenswrapper[4890]: I0121 17:08:22.368886 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-host\") pod \"crc-debug-t54kf\" (UID: \"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\") " pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:08:22 crc 
kubenswrapper[4890]: I0121 17:08:22.396587 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktx8m\" (UniqueName: \"kubernetes.io/projected/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-kube-api-access-ktx8m\") pod \"crc-debug-t54kf\" (UID: \"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\") " pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:08:22 crc kubenswrapper[4890]: I0121 17:08:22.489404 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:08:22 crc kubenswrapper[4890]: W0121 17:08:22.524538 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fbdb918_d22d_45ce_aba1_b39a30d6e81d.slice/crio-aea00cf2c4fc3dae9f528539645da312131606f200ce8fc7aeaa2db56cd2ac8b WatchSource:0}: Error finding container aea00cf2c4fc3dae9f528539645da312131606f200ce8fc7aeaa2db56cd2ac8b: Status 404 returned error can't find the container with id aea00cf2c4fc3dae9f528539645da312131606f200ce8fc7aeaa2db56cd2ac8b Jan 21 17:08:23 crc kubenswrapper[4890]: I0121 17:08:23.168928 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcqzq/crc-debug-t54kf" event={"ID":"9fbdb918-d22d-45ce-aba1-b39a30d6e81d","Type":"ContainerStarted","Data":"aea00cf2c4fc3dae9f528539645da312131606f200ce8fc7aeaa2db56cd2ac8b"} Jan 21 17:08:24 crc kubenswrapper[4890]: I0121 17:08:24.676578 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69f667c44c-9pmw8_97c71702-fe9c-4370-8dd9-2b9a619e939b/dnsmasq-dns/0.log" Jan 21 17:08:24 crc kubenswrapper[4890]: I0121 17:08:24.689929 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69f667c44c-9pmw8_97c71702-fe9c-4370-8dd9-2b9a619e939b/init/0.log" Jan 21 17:08:24 crc kubenswrapper[4890]: I0121 17:08:24.703289 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-0a87-account-create-update-hv2jn_01496eb9-e1e6-45fc-872e-63b8be1baec4/mariadb-account-create-update/0.log" Jan 21 17:08:24 crc kubenswrapper[4890]: I0121 17:08:24.764497 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5f9d658c87-4d7p9_0d17306c-479e-41c5-89d8-0ea8d614e4b9/keystone-api/0.log" Jan 21 17:08:24 crc kubenswrapper[4890]: I0121 17:08:24.785847 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-rwhlx_a437a071-861b-41f4-b78f-b2b5775d464f/keystone-bootstrap/0.log" Jan 21 17:08:24 crc kubenswrapper[4890]: I0121 17:08:24.803149 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-d7hbm_10501a22-53ef-4e70-9746-bea547a51fac/mariadb-database-create/0.log" Jan 21 17:08:24 crc kubenswrapper[4890]: I0121 17:08:24.823901 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-7m788_ac707741-3e2a-4e3b-91d9-4506d55b585f/keystone-db-sync/0.log" Jan 21 17:08:24 crc kubenswrapper[4890]: I0121 17:08:24.840604 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_58287e81-1deb-4dbc-a395-087a84f0830b/adoption/0.log" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.618612 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mxg9h"] Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.620514 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.642451 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxg9h"] Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.735553 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-catalog-content\") pod \"redhat-operators-mxg9h\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.735640 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-utilities\") pod \"redhat-operators-mxg9h\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.735691 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jbjg\" (UniqueName: \"kubernetes.io/projected/f38178ea-1060-4dbc-b873-eba7afae165d-kube-api-access-4jbjg\") pod \"redhat-operators-mxg9h\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.837053 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-utilities\") pod \"redhat-operators-mxg9h\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.837130 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4jbjg\" (UniqueName: \"kubernetes.io/projected/f38178ea-1060-4dbc-b873-eba7afae165d-kube-api-access-4jbjg\") pod \"redhat-operators-mxg9h\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.837197 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-catalog-content\") pod \"redhat-operators-mxg9h\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.837697 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-catalog-content\") pod \"redhat-operators-mxg9h\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.837917 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-utilities\") pod \"redhat-operators-mxg9h\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.865208 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jbjg\" (UniqueName: \"kubernetes.io/projected/f38178ea-1060-4dbc-b873-eba7afae165d-kube-api-access-4jbjg\") pod \"redhat-operators-mxg9h\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.963013 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:25 crc kubenswrapper[4890]: I0121 17:08:25.994988 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f51f8509-8290-4173-ac62-16755f39ec90/memcached/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.017735 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b4ec1333-3f37-4646-b941-a14dc29c7b34/galera/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.037624 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b4ec1333-3f37-4646-b941-a14dc29c7b34/mysql-bootstrap/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.075617 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bb6d3f3b-c2f7-4793-8a37-b892f720145c/galera/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.113604 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bb6d3f3b-c2f7-4793-8a37-b892f720145c/mysql-bootstrap/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.142485 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a857cb13-0413-4d0a-96e6-40ff7d6ab4a3/openstackclient/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.170457 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_38309d11-5d19-4fcf-b65e-ba1c6acb4f69/adoption/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.195867 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3bc77d57-1207-4646-a8cc-5855c7f15f91/ovn-northd/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.225750 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3bc77d57-1207-4646-a8cc-5855c7f15f91/openstack-network-exporter/0.log" Jan 21 17:08:26 crc 
kubenswrapper[4890]: I0121 17:08:26.269152 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_703438eb-576e-4abc-b9ca-3ed7db28e8a2/ovsdbserver-nb/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.286848 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_703438eb-576e-4abc-b9ca-3ed7db28e8a2/openstack-network-exporter/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.311772 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8b5542db-440c-41a8-855f-4046ecda9ec8/ovsdbserver-nb/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.328393 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8b5542db-440c-41a8-855f-4046ecda9ec8/openstack-network-exporter/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.358157 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187/ovsdbserver-nb/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.368598 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_b3b3b8cf-c2a7-4fea-bbc7-9127b06f2187/openstack-network-exporter/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.411771 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1321d6ed-786c-45ea-bacc-14ba6afa47e5/ovsdbserver-sb/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.426255 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1321d6ed-786c-45ea-bacc-14ba6afa47e5/openstack-network-exporter/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.464500 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_a6329e18-cfd2-46bf-862b-ba11f05e02fd/ovsdbserver-sb/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 
17:08:26.471217 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_a6329e18-cfd2-46bf-862b-ba11f05e02fd/openstack-network-exporter/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.500001 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_da8d3913-19b9-4fdc-ad72-8866d73f5139/ovsdbserver-sb/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.516762 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_da8d3913-19b9-4fdc-ad72-8866d73f5139/openstack-network-exporter/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.521987 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxg9h"] Jan 21 17:08:26 crc kubenswrapper[4890]: W0121 17:08:26.526609 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf38178ea_1060_4dbc_b873_eba7afae165d.slice/crio-49c70352d67aa1740655311e3c64101966611f1f98b9dd64e0095a3b71584ef6 WatchSource:0}: Error finding container 49c70352d67aa1740655311e3c64101966611f1f98b9dd64e0095a3b71584ef6: Status 404 returned error can't find the container with id 49c70352d67aa1740655311e3c64101966611f1f98b9dd64e0095a3b71584ef6 Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.561102 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_53f07946-ba37-4267-af0a-6071177f2a6d/rabbitmq/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.567595 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_53f07946-ba37-4267-af0a-6071177f2a6d/setup-container/0.log" Jan 21 17:08:26 crc kubenswrapper[4890]: I0121 17:08:26.597185 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bb70f116-e2f7-4501-87fa-d519a1d6d3f9/rabbitmq/0.log" Jan 21 17:08:26 crc 
kubenswrapper[4890]: I0121 17:08:26.614267 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bb70f116-e2f7-4501-87fa-d519a1d6d3f9/setup-container/0.log" Jan 21 17:08:27 crc kubenswrapper[4890]: I0121 17:08:27.218608 4890 generic.go:334] "Generic (PLEG): container finished" podID="f38178ea-1060-4dbc-b873-eba7afae165d" containerID="d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642" exitCode=0 Jan 21 17:08:27 crc kubenswrapper[4890]: I0121 17:08:27.218900 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxg9h" event={"ID":"f38178ea-1060-4dbc-b873-eba7afae165d","Type":"ContainerDied","Data":"d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642"} Jan 21 17:08:27 crc kubenswrapper[4890]: I0121 17:08:27.218959 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxg9h" event={"ID":"f38178ea-1060-4dbc-b873-eba7afae165d","Type":"ContainerStarted","Data":"49c70352d67aa1740655311e3c64101966611f1f98b9dd64e0095a3b71584ef6"} Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.240264 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxg9h" event={"ID":"f38178ea-1060-4dbc-b873-eba7afae165d","Type":"ContainerStarted","Data":"580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce"} Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.597006 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lcrln"] Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.598912 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.619468 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcrln"] Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.692498 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9s6\" (UniqueName: \"kubernetes.io/projected/1c2cb648-2a9b-444b-aec4-d88dde8f951b-kube-api-access-zj9s6\") pod \"redhat-marketplace-lcrln\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.692550 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-catalog-content\") pod \"redhat-marketplace-lcrln\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.692603 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-utilities\") pod \"redhat-marketplace-lcrln\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.794471 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-utilities\") pod \"redhat-marketplace-lcrln\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.794954 4890 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zj9s6\" (UniqueName: \"kubernetes.io/projected/1c2cb648-2a9b-444b-aec4-d88dde8f951b-kube-api-access-zj9s6\") pod \"redhat-marketplace-lcrln\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.794994 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-catalog-content\") pod \"redhat-marketplace-lcrln\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.795186 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-utilities\") pod \"redhat-marketplace-lcrln\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.795510 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-catalog-content\") pod \"redhat-marketplace-lcrln\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.830533 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9s6\" (UniqueName: \"kubernetes.io/projected/1c2cb648-2a9b-444b-aec4-d88dde8f951b-kube-api-access-zj9s6\") pod \"redhat-marketplace-lcrln\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:28 crc kubenswrapper[4890]: I0121 17:08:28.919458 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:29 crc kubenswrapper[4890]: I0121 17:08:29.324028 4890 generic.go:334] "Generic (PLEG): container finished" podID="f38178ea-1060-4dbc-b873-eba7afae165d" containerID="580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce" exitCode=0 Jan 21 17:08:29 crc kubenswrapper[4890]: I0121 17:08:29.324273 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxg9h" event={"ID":"f38178ea-1060-4dbc-b873-eba7afae165d","Type":"ContainerDied","Data":"580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce"} Jan 21 17:08:29 crc kubenswrapper[4890]: I0121 17:08:29.701215 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcrln"] Jan 21 17:08:29 crc kubenswrapper[4890]: W0121 17:08:29.712102 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c2cb648_2a9b_444b_aec4_d88dde8f951b.slice/crio-a17a61cbcf9fc398881b5b6728d53834e64963e69b2499f67cebc5f0b0be6ad4 WatchSource:0}: Error finding container a17a61cbcf9fc398881b5b6728d53834e64963e69b2499f67cebc5f0b0be6ad4: Status 404 returned error can't find the container with id a17a61cbcf9fc398881b5b6728d53834e64963e69b2499f67cebc5f0b0be6ad4 Jan 21 17:08:30 crc kubenswrapper[4890]: I0121 17:08:30.336229 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxg9h" event={"ID":"f38178ea-1060-4dbc-b873-eba7afae165d","Type":"ContainerStarted","Data":"900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8"} Jan 21 17:08:30 crc kubenswrapper[4890]: I0121 17:08:30.338891 4890 generic.go:334] "Generic (PLEG): container finished" podID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerID="dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc" exitCode=0 Jan 21 17:08:30 crc kubenswrapper[4890]: I0121 
17:08:30.338939 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcrln" event={"ID":"1c2cb648-2a9b-444b-aec4-d88dde8f951b","Type":"ContainerDied","Data":"dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc"} Jan 21 17:08:30 crc kubenswrapper[4890]: I0121 17:08:30.338989 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcrln" event={"ID":"1c2cb648-2a9b-444b-aec4-d88dde8f951b","Type":"ContainerStarted","Data":"a17a61cbcf9fc398881b5b6728d53834e64963e69b2499f67cebc5f0b0be6ad4"} Jan 21 17:08:30 crc kubenswrapper[4890]: I0121 17:08:30.360093 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mxg9h" podStartSLOduration=2.713638208 podStartE2EDuration="5.360077443s" podCreationTimestamp="2026-01-21 17:08:25 +0000 UTC" firstStartedPulling="2026-01-21 17:08:27.220849333 +0000 UTC m=+5789.582291742" lastFinishedPulling="2026-01-21 17:08:29.867288568 +0000 UTC m=+5792.228730977" observedRunningTime="2026-01-21 17:08:30.359131199 +0000 UTC m=+5792.720573608" watchObservedRunningTime="2026-01-21 17:08:30.360077443 +0000 UTC m=+5792.721519852" Jan 21 17:08:35 crc kubenswrapper[4890]: I0121 17:08:35.963926 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:35 crc kubenswrapper[4890]: I0121 17:08:35.964581 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:35 crc kubenswrapper[4890]: I0121 17:08:35.973275 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6_b3a9acd8-9d34-419d-861b-232a4de671b9/extract/0.log" Jan 21 17:08:35 crc kubenswrapper[4890]: I0121 17:08:35.996527 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6_b3a9acd8-9d34-419d-861b-232a4de671b9/util/0.log" Jan 21 17:08:36 crc kubenswrapper[4890]: I0121 17:08:36.013229 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6_b3a9acd8-9d34-419d-861b-232a4de671b9/pull/0.log" Jan 21 17:08:36 crc kubenswrapper[4890]: I0121 17:08:36.110413 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-f92ld_2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538/manager/0.log" Jan 21 17:08:36 crc kubenswrapper[4890]: I0121 17:08:36.364220 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-5c7zf_0f4bb54d-23a1-4b41-995f-d7affd9cd504/manager/0.log" Jan 21 17:08:36 crc kubenswrapper[4890]: I0121 17:08:36.393024 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-l7ndf_4319998f-d413-4412-bffd-7123d46bce19/manager/0.log" Jan 21 17:08:36 crc kubenswrapper[4890]: I0121 17:08:36.724336 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h6r95_44fcf69f-7131-43c4-9303-f5636c294644/manager/0.log" Jan 21 17:08:36 crc kubenswrapper[4890]: I0121 17:08:36.736144 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-v7zt4_8791802a-0f5d-4d66-a19b-bf2b373ddd56/manager/0.log" Jan 21 17:08:36 crc kubenswrapper[4890]: I0121 17:08:36.746826 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pqzjj_91df512f-6657-44f1-b643-c18778e5d159/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.025695 4890 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mxg9h" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" containerName="registry-server" probeResult="failure" output=< Jan 21 17:08:37 crc kubenswrapper[4890]: timeout: failed to connect service ":50051" within 1s Jan 21 17:08:37 crc kubenswrapper[4890]: > Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.145312 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-bbwtr_6dd75ffe-4c90-493d-b5af-313056532562/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.158918 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-xvfb4_41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.251829 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-rklqr_d1fd4cb9-f562-48b3-b829-55f48fc8a414/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.266648 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-jcfgv_ab7d4301-6caa-4a1e-a634-e4a355271b68/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.325950 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-nfvzq_88bf9325-183a-4b37-8278-fdf6a95edf3c/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.380237 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-wmm44_cb112e2e-1c3b-4701-87ec-dee15131d2a9/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.477567 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-4lhwm_eeba017a-ca09-444f-a4c6-895ec31b914b/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.489052 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-nwlnf_4950c09f-4cbd-49e4-906f-e4451c610111/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.510431 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986dzvgss_daa2bbb5-55a8-4920-9109-45bcd643bd9f/manager/0.log" Jan 21 17:08:37 crc kubenswrapper[4890]: I0121 17:08:37.650986 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-x45bd_e99a0335-92f6-4871-983a-b61c4d78256e/operator/0.log" Jan 21 17:08:39 crc kubenswrapper[4890]: I0121 17:08:39.087016 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-tqb8d_41b6e8d7-5b8e-4953-bb8c-af061e0fda60/manager/0.log" Jan 21 17:08:39 crc kubenswrapper[4890]: I0121 17:08:39.178842 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4dn5g_2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9/registry-server/0.log" Jan 21 17:08:39 crc kubenswrapper[4890]: I0121 17:08:39.505844 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9nwtw_f176bee8-4c10-4d65-bd9c-5e95bdc707c6/manager/0.log" Jan 21 17:08:39 crc kubenswrapper[4890]: I0121 17:08:39.620209 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-8qwg4_59304f72-3a8b-460b-989d-706b9e898d76/manager/0.log" Jan 21 17:08:39 crc kubenswrapper[4890]: I0121 17:08:39.647881 4890 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8rlhm_cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f/operator/0.log" Jan 21 17:08:39 crc kubenswrapper[4890]: I0121 17:08:39.731124 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-6h8pt_acf11348-ddc8-494c-8ecc-ad1f5f44366f/manager/0.log" Jan 21 17:08:39 crc kubenswrapper[4890]: I0121 17:08:39.963143 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-sqqwl_105a410d-5ae8-44ad-9e48-e7cd00ae3c27/manager/0.log" Jan 21 17:08:39 crc kubenswrapper[4890]: I0121 17:08:39.978084 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cckzf_03a78be9-bd85-449f-93b6-1379195280c0/manager/0.log" Jan 21 17:08:39 crc kubenswrapper[4890]: I0121 17:08:39.993610 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-x2dl5_435e0e5d-88a8-4737-aa7d-cefffc292c23/manager/0.log" Jan 21 17:08:41 crc kubenswrapper[4890]: I0121 17:08:41.440616 4890 generic.go:334] "Generic (PLEG): container finished" podID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerID="959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b" exitCode=0 Jan 21 17:08:41 crc kubenswrapper[4890]: I0121 17:08:41.440732 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcrln" event={"ID":"1c2cb648-2a9b-444b-aec4-d88dde8f951b","Type":"ContainerDied","Data":"959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b"} Jan 21 17:08:41 crc kubenswrapper[4890]: I0121 17:08:41.442482 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcqzq/crc-debug-t54kf" 
event={"ID":"9fbdb918-d22d-45ce-aba1-b39a30d6e81d","Type":"ContainerStarted","Data":"75f4fd6885bfeef646601242b4a5c6ce2476d3e8dda2fccab05d78a0b71bc154"} Jan 21 17:08:41 crc kubenswrapper[4890]: I0121 17:08:41.479071 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qcqzq/crc-debug-t54kf" podStartSLOduration=1.136667541 podStartE2EDuration="19.479051832s" podCreationTimestamp="2026-01-21 17:08:22 +0000 UTC" firstStartedPulling="2026-01-21 17:08:22.526867253 +0000 UTC m=+5784.888309662" lastFinishedPulling="2026-01-21 17:08:40.869251544 +0000 UTC m=+5803.230693953" observedRunningTime="2026-01-21 17:08:41.473407471 +0000 UTC m=+5803.834849890" watchObservedRunningTime="2026-01-21 17:08:41.479051832 +0000 UTC m=+5803.840494251" Jan 21 17:08:42 crc kubenswrapper[4890]: I0121 17:08:42.452328 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcrln" event={"ID":"1c2cb648-2a9b-444b-aec4-d88dde8f951b","Type":"ContainerStarted","Data":"3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24"} Jan 21 17:08:42 crc kubenswrapper[4890]: I0121 17:08:42.469210 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lcrln" podStartSLOduration=10.895941534 podStartE2EDuration="14.469193418s" podCreationTimestamp="2026-01-21 17:08:28 +0000 UTC" firstStartedPulling="2026-01-21 17:08:38.339980195 +0000 UTC m=+5800.701422604" lastFinishedPulling="2026-01-21 17:08:41.913232089 +0000 UTC m=+5804.274674488" observedRunningTime="2026-01-21 17:08:42.467542017 +0000 UTC m=+5804.828984426" watchObservedRunningTime="2026-01-21 17:08:42.469193418 +0000 UTC m=+5804.830635827" Jan 21 17:08:43 crc kubenswrapper[4890]: I0121 17:08:43.710184 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fsz8b_8897f3cf-e9a2-40ce-9353-018d197f47b1/control-plane-machine-set-operator/0.log" Jan 21 17:08:43 crc kubenswrapper[4890]: I0121 17:08:43.734130 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vd9wg_513d9ec4-2b91-4609-ba1a-0e6f0b551d1a/kube-rbac-proxy/0.log" Jan 21 17:08:43 crc kubenswrapper[4890]: I0121 17:08:43.755113 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vd9wg_513d9ec4-2b91-4609-ba1a-0e6f0b551d1a/machine-api-operator/0.log" Jan 21 17:08:46 crc kubenswrapper[4890]: I0121 17:08:46.014620 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:46 crc kubenswrapper[4890]: I0121 17:08:46.061970 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:46 crc kubenswrapper[4890]: I0121 17:08:46.267895 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mxg9h"] Jan 21 17:08:46 crc kubenswrapper[4890]: I0121 17:08:46.561760 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-z6xnk_cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9/controller/0.log" Jan 21 17:08:46 crc kubenswrapper[4890]: I0121 17:08:46.581205 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-z6xnk_cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9/kube-rbac-proxy/0.log" Jan 21 17:08:46 crc kubenswrapper[4890]: I0121 17:08:46.618013 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/controller/0.log" Jan 21 17:08:47 crc kubenswrapper[4890]: I0121 17:08:47.485276 4890 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-mxg9h" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" containerName="registry-server" containerID="cri-o://900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8" gracePeriod=2 Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.291879 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.303637 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/frr/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.312971 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/reloader/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.341481 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/frr-metrics/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.348161 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/kube-rbac-proxy/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.355109 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/kube-rbac-proxy-frr/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.361782 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/cp-frr-files/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.368126 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/cp-reloader/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 
17:08:48.374866 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/cp-metrics/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.390926 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dqvjk_7c1ba220-b8cb-40b8-ae09-e94423a126a5/frr-k8s-webhook-server/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.408913 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-catalog-content\") pod \"f38178ea-1060-4dbc-b873-eba7afae165d\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.409034 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jbjg\" (UniqueName: \"kubernetes.io/projected/f38178ea-1060-4dbc-b873-eba7afae165d-kube-api-access-4jbjg\") pod \"f38178ea-1060-4dbc-b873-eba7afae165d\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.409130 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-utilities\") pod \"f38178ea-1060-4dbc-b873-eba7afae165d\" (UID: \"f38178ea-1060-4dbc-b873-eba7afae165d\") " Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.411124 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-utilities" (OuterVolumeSpecName: "utilities") pod "f38178ea-1060-4dbc-b873-eba7afae165d" (UID: "f38178ea-1060-4dbc-b873-eba7afae165d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.416394 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f38178ea-1060-4dbc-b873-eba7afae165d-kube-api-access-4jbjg" (OuterVolumeSpecName: "kube-api-access-4jbjg") pod "f38178ea-1060-4dbc-b873-eba7afae165d" (UID: "f38178ea-1060-4dbc-b873-eba7afae165d"). InnerVolumeSpecName "kube-api-access-4jbjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.418065 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-66c89b74b6-f8w84_2a07a757-38cf-476f-acf2-758e953dc057/manager/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.431860 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5c9dbfd5c5-cnkj6_9751dff3-fe38-4186-9f8c-e8aa4a08cf6d/webhook-server/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.506697 4890 generic.go:334] "Generic (PLEG): container finished" podID="f38178ea-1060-4dbc-b873-eba7afae165d" containerID="900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8" exitCode=0 Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.506737 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxg9h" event={"ID":"f38178ea-1060-4dbc-b873-eba7afae165d","Type":"ContainerDied","Data":"900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8"} Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.506772 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxg9h" event={"ID":"f38178ea-1060-4dbc-b873-eba7afae165d","Type":"ContainerDied","Data":"49c70352d67aa1740655311e3c64101966611f1f98b9dd64e0095a3b71584ef6"} Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.506825 4890 scope.go:117] 
"RemoveContainer" containerID="900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.506968 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxg9h" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.511459 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.511483 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jbjg\" (UniqueName: \"kubernetes.io/projected/f38178ea-1060-4dbc-b873-eba7afae165d-kube-api-access-4jbjg\") on node \"crc\" DevicePath \"\"" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.535643 4890 scope.go:117] "RemoveContainer" containerID="580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.551825 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f38178ea-1060-4dbc-b873-eba7afae165d" (UID: "f38178ea-1060-4dbc-b873-eba7afae165d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.567724 4890 scope.go:117] "RemoveContainer" containerID="d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.593600 4890 scope.go:117] "RemoveContainer" containerID="900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8" Jan 21 17:08:48 crc kubenswrapper[4890]: E0121 17:08:48.595912 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8\": container with ID starting with 900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8 not found: ID does not exist" containerID="900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.596037 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8"} err="failed to get container status \"900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8\": rpc error: code = NotFound desc = could not find container \"900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8\": container with ID starting with 900b23453cb2e067f2841b0d01c0616c6d30bee91480eaaedc5b884b9d2ec2e8 not found: ID does not exist" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.596129 4890 scope.go:117] "RemoveContainer" containerID="580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce" Jan 21 17:08:48 crc kubenswrapper[4890]: E0121 17:08:48.596808 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce\": container with ID starting with 
580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce not found: ID does not exist" containerID="580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.596856 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce"} err="failed to get container status \"580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce\": rpc error: code = NotFound desc = could not find container \"580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce\": container with ID starting with 580e867b5aac8b3149c651ba2f57889151f55746fa0c1174497251a452a135ce not found: ID does not exist" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.596881 4890 scope.go:117] "RemoveContainer" containerID="d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642" Jan 21 17:08:48 crc kubenswrapper[4890]: E0121 17:08:48.598370 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642\": container with ID starting with d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642 not found: ID does not exist" containerID="d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.598486 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642"} err="failed to get container status \"d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642\": rpc error: code = NotFound desc = could not find container \"d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642\": container with ID starting with d456594a78647d9e6dbf33a80c5c64efea582f0fe1acaf5e9d82b85742acb642 not found: ID does not 
exist" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.612494 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f38178ea-1060-4dbc-b873-eba7afae165d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.816924 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f25hv_de8bead9-3c0f-4ba7-908b-c8c21763eb7c/speaker/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.823844 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f25hv_de8bead9-3c0f-4ba7-908b-c8c21763eb7c/kube-rbac-proxy/0.log" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.841469 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mxg9h"] Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.873603 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mxg9h"] Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.920461 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.920508 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:48 crc kubenswrapper[4890]: I0121 17:08:48.972995 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:49 crc kubenswrapper[4890]: I0121 17:08:49.565201 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:49 crc kubenswrapper[4890]: I0121 17:08:49.924138 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" 
path="/var/lib/kubelet/pods/f38178ea-1060-4dbc-b873-eba7afae165d/volumes" Jan 21 17:08:51 crc kubenswrapper[4890]: I0121 17:08:51.268138 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcrln"] Jan 21 17:08:51 crc kubenswrapper[4890]: I0121 17:08:51.529955 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lcrln" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerName="registry-server" containerID="cri-o://3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24" gracePeriod=2 Jan 21 17:08:51 crc kubenswrapper[4890]: I0121 17:08:51.984068 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:52 crc kubenswrapper[4890]: I0121 17:08:52.065041 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-catalog-content\") pod \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " Jan 21 17:08:52 crc kubenswrapper[4890]: I0121 17:08:52.065153 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-utilities\") pod \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " Jan 21 17:08:52 crc kubenswrapper[4890]: I0121 17:08:52.065214 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj9s6\" (UniqueName: \"kubernetes.io/projected/1c2cb648-2a9b-444b-aec4-d88dde8f951b-kube-api-access-zj9s6\") pod \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\" (UID: \"1c2cb648-2a9b-444b-aec4-d88dde8f951b\") " Jan 21 17:08:52 crc kubenswrapper[4890]: I0121 17:08:52.066269 4890 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-utilities" (OuterVolumeSpecName: "utilities") pod "1c2cb648-2a9b-444b-aec4-d88dde8f951b" (UID: "1c2cb648-2a9b-444b-aec4-d88dde8f951b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:08:52 crc kubenswrapper[4890]: I0121 17:08:52.066493 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:08:52 crc kubenswrapper[4890]: I0121 17:08:52.071401 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2cb648-2a9b-444b-aec4-d88dde8f951b-kube-api-access-zj9s6" (OuterVolumeSpecName: "kube-api-access-zj9s6") pod "1c2cb648-2a9b-444b-aec4-d88dde8f951b" (UID: "1c2cb648-2a9b-444b-aec4-d88dde8f951b"). InnerVolumeSpecName "kube-api-access-zj9s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:08:52 crc kubenswrapper[4890]: I0121 17:08:52.108301 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c2cb648-2a9b-444b-aec4-d88dde8f951b" (UID: "1c2cb648-2a9b-444b-aec4-d88dde8f951b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:08:52 crc kubenswrapper[4890]: I0121 17:08:52.167070 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj9s6\" (UniqueName: \"kubernetes.io/projected/1c2cb648-2a9b-444b-aec4-d88dde8f951b-kube-api-access-zj9s6\") on node \"crc\" DevicePath \"\"" Jan 21 17:08:52 crc kubenswrapper[4890]: I0121 17:08:52.167102 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2cb648-2a9b-444b-aec4-d88dde8f951b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:08:53 crc kubenswrapper[4890]: I0121 17:08:53.310512 4890 generic.go:334] "Generic (PLEG): container finished" podID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerID="3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24" exitCode=0 Jan 21 17:08:53 crc kubenswrapper[4890]: I0121 17:08:53.310886 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcrln" event={"ID":"1c2cb648-2a9b-444b-aec4-d88dde8f951b","Type":"ContainerDied","Data":"3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24"} Jan 21 17:08:53 crc kubenswrapper[4890]: I0121 17:08:53.310914 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcrln" event={"ID":"1c2cb648-2a9b-444b-aec4-d88dde8f951b","Type":"ContainerDied","Data":"a17a61cbcf9fc398881b5b6728d53834e64963e69b2499f67cebc5f0b0be6ad4"} Jan 21 17:08:53 crc kubenswrapper[4890]: I0121 17:08:53.310930 4890 scope.go:117] "RemoveContainer" containerID="3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24" Jan 21 17:08:53 crc kubenswrapper[4890]: I0121 17:08:53.311081 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:08:54 crc kubenswrapper[4890]: I0121 17:08:54.557278 4890 scope.go:117] "RemoveContainer" containerID="959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b" Jan 21 17:08:54 crc kubenswrapper[4890]: I0121 17:08:54.593656 4890 scope.go:117] "RemoveContainer" containerID="dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc" Jan 21 17:08:54 crc kubenswrapper[4890]: I0121 17:08:54.620149 4890 scope.go:117] "RemoveContainer" containerID="3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24" Jan 21 17:08:54 crc kubenswrapper[4890]: E0121 17:08:54.622983 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24\": container with ID starting with 3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24 not found: ID does not exist" containerID="3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24" Jan 21 17:08:54 crc kubenswrapper[4890]: I0121 17:08:54.623122 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24"} err="failed to get container status \"3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24\": rpc error: code = NotFound desc = could not find container \"3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24\": container with ID starting with 3566654dd67958db61f24a582eea42a9d81ab6d4ad495ed450bef6820c2efc24 not found: ID does not exist" Jan 21 17:08:54 crc kubenswrapper[4890]: I0121 17:08:54.623228 4890 scope.go:117] "RemoveContainer" containerID="959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b" Jan 21 17:08:54 crc kubenswrapper[4890]: E0121 17:08:54.623791 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b\": container with ID starting with 959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b not found: ID does not exist" containerID="959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b" Jan 21 17:08:54 crc kubenswrapper[4890]: I0121 17:08:54.623838 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b"} err="failed to get container status \"959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b\": rpc error: code = NotFound desc = could not find container \"959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b\": container with ID starting with 959ab11ffe9f49c913c449522e38a84be632631c57ef6ed2d31a1a461b17db1b not found: ID does not exist" Jan 21 17:08:54 crc kubenswrapper[4890]: I0121 17:08:54.623906 4890 scope.go:117] "RemoveContainer" containerID="dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc" Jan 21 17:08:54 crc kubenswrapper[4890]: E0121 17:08:54.624251 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc\": container with ID starting with dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc not found: ID does not exist" containerID="dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc" Jan 21 17:08:54 crc kubenswrapper[4890]: I0121 17:08:54.624302 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc"} err="failed to get container status \"dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc\": rpc error: code = NotFound desc = could not find container 
\"dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc\": container with ID starting with dd09c6f0d9edc86fd5d02d9a111b90c38c47964ea4c8444a4403e0939d01dffc not found: ID does not exist" Jan 21 17:08:59 crc kubenswrapper[4890]: I0121 17:08:59.369776 4890 generic.go:334] "Generic (PLEG): container finished" podID="9fbdb918-d22d-45ce-aba1-b39a30d6e81d" containerID="75f4fd6885bfeef646601242b4a5c6ce2476d3e8dda2fccab05d78a0b71bc154" exitCode=0 Jan 21 17:08:59 crc kubenswrapper[4890]: I0121 17:08:59.369847 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcqzq/crc-debug-t54kf" event={"ID":"9fbdb918-d22d-45ce-aba1-b39a30d6e81d","Type":"ContainerDied","Data":"75f4fd6885bfeef646601242b4a5c6ce2476d3e8dda2fccab05d78a0b71bc154"} Jan 21 17:09:00 crc kubenswrapper[4890]: I0121 17:09:00.462231 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:09:00 crc kubenswrapper[4890]: I0121 17:09:00.495626 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qcqzq/crc-debug-t54kf"] Jan 21 17:09:00 crc kubenswrapper[4890]: I0121 17:09:00.505504 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qcqzq/crc-debug-t54kf"] Jan 21 17:09:00 crc kubenswrapper[4890]: I0121 17:09:00.582936 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-host\") pod \"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\" (UID: \"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\") " Jan 21 17:09:00 crc kubenswrapper[4890]: I0121 17:09:00.583257 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktx8m\" (UniqueName: \"kubernetes.io/projected/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-kube-api-access-ktx8m\") pod \"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\" (UID: 
\"9fbdb918-d22d-45ce-aba1-b39a30d6e81d\") " Jan 21 17:09:00 crc kubenswrapper[4890]: I0121 17:09:00.583019 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-host" (OuterVolumeSpecName: "host") pod "9fbdb918-d22d-45ce-aba1-b39a30d6e81d" (UID: "9fbdb918-d22d-45ce-aba1-b39a30d6e81d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:09:00 crc kubenswrapper[4890]: I0121 17:09:00.583940 4890 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-host\") on node \"crc\" DevicePath \"\"" Jan 21 17:09:00 crc kubenswrapper[4890]: I0121 17:09:00.588330 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-kube-api-access-ktx8m" (OuterVolumeSpecName: "kube-api-access-ktx8m") pod "9fbdb918-d22d-45ce-aba1-b39a30d6e81d" (UID: "9fbdb918-d22d-45ce-aba1-b39a30d6e81d"). InnerVolumeSpecName "kube-api-access-ktx8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:09:00 crc kubenswrapper[4890]: I0121 17:09:00.685319 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktx8m\" (UniqueName: \"kubernetes.io/projected/9fbdb918-d22d-45ce-aba1-b39a30d6e81d-kube-api-access-ktx8m\") on node \"crc\" DevicePath \"\"" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.387243 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea00cf2c4fc3dae9f528539645da312131606f200ce8fc7aeaa2db56cd2ac8b" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.387333 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcqzq/crc-debug-t54kf" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.642389 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qcqzq/crc-debug-n6prh"] Jan 21 17:09:01 crc kubenswrapper[4890]: E0121 17:09:01.643809 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" containerName="extract-content" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.643853 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" containerName="extract-content" Jan 21 17:09:01 crc kubenswrapper[4890]: E0121 17:09:01.643866 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" containerName="extract-utilities" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.643874 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" containerName="extract-utilities" Jan 21 17:09:01 crc kubenswrapper[4890]: E0121 17:09:01.643890 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbdb918-d22d-45ce-aba1-b39a30d6e81d" containerName="container-00" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.643898 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbdb918-d22d-45ce-aba1-b39a30d6e81d" containerName="container-00" Jan 21 17:09:01 crc kubenswrapper[4890]: E0121 17:09:01.643915 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerName="registry-server" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.643923 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerName="registry-server" Jan 21 17:09:01 crc kubenswrapper[4890]: E0121 17:09:01.643938 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" 
containerName="registry-server" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.643943 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" containerName="registry-server" Jan 21 17:09:01 crc kubenswrapper[4890]: E0121 17:09:01.643960 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerName="extract-content" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.643966 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerName="extract-content" Jan 21 17:09:01 crc kubenswrapper[4890]: E0121 17:09:01.643984 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerName="extract-utilities" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.643992 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerName="extract-utilities" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.644173 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" containerName="registry-server" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.644187 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f38178ea-1060-4dbc-b873-eba7afae165d" containerName="registry-server" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.644202 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbdb918-d22d-45ce-aba1-b39a30d6e81d" containerName="container-00" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.644923 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.802871 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58x5r\" (UniqueName: \"kubernetes.io/projected/a1880817-80a9-4595-83e0-7f5eee2d4a18-kube-api-access-58x5r\") pod \"crc-debug-n6prh\" (UID: \"a1880817-80a9-4595-83e0-7f5eee2d4a18\") " pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.802971 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1880817-80a9-4595-83e0-7f5eee2d4a18-host\") pod \"crc-debug-n6prh\" (UID: \"a1880817-80a9-4595-83e0-7f5eee2d4a18\") " pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.904326 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58x5r\" (UniqueName: \"kubernetes.io/projected/a1880817-80a9-4595-83e0-7f5eee2d4a18-kube-api-access-58x5r\") pod \"crc-debug-n6prh\" (UID: \"a1880817-80a9-4595-83e0-7f5eee2d4a18\") " pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.904509 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1880817-80a9-4595-83e0-7f5eee2d4a18-host\") pod \"crc-debug-n6prh\" (UID: \"a1880817-80a9-4595-83e0-7f5eee2d4a18\") " pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.904691 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1880817-80a9-4595-83e0-7f5eee2d4a18-host\") pod \"crc-debug-n6prh\" (UID: \"a1880817-80a9-4595-83e0-7f5eee2d4a18\") " pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:01 crc 
kubenswrapper[4890]: I0121 17:09:01.923378 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbdb918-d22d-45ce-aba1-b39a30d6e81d" path="/var/lib/kubelet/pods/9fbdb918-d22d-45ce-aba1-b39a30d6e81d/volumes" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.929203 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58x5r\" (UniqueName: \"kubernetes.io/projected/a1880817-80a9-4595-83e0-7f5eee2d4a18-kube-api-access-58x5r\") pod \"crc-debug-n6prh\" (UID: \"a1880817-80a9-4595-83e0-7f5eee2d4a18\") " pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:01 crc kubenswrapper[4890]: I0121 17:09:01.959922 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:02 crc kubenswrapper[4890]: I0121 17:09:02.396734 4890 generic.go:334] "Generic (PLEG): container finished" podID="a1880817-80a9-4595-83e0-7f5eee2d4a18" containerID="57fb40883e9b882aeb23f87c226cd605976ccf994d2f37c738595e9f2fe6688a" exitCode=1 Jan 21 17:09:02 crc kubenswrapper[4890]: I0121 17:09:02.396814 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcqzq/crc-debug-n6prh" event={"ID":"a1880817-80a9-4595-83e0-7f5eee2d4a18","Type":"ContainerDied","Data":"57fb40883e9b882aeb23f87c226cd605976ccf994d2f37c738595e9f2fe6688a"} Jan 21 17:09:02 crc kubenswrapper[4890]: I0121 17:09:02.396847 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcqzq/crc-debug-n6prh" event={"ID":"a1880817-80a9-4595-83e0-7f5eee2d4a18","Type":"ContainerStarted","Data":"063fb688b7628de2745536e996ee9c41e96458963f1e0da09c2677b8a8b94989"} Jan 21 17:09:02 crc kubenswrapper[4890]: I0121 17:09:02.434473 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qcqzq/crc-debug-n6prh"] Jan 21 17:09:02 crc kubenswrapper[4890]: I0121 17:09:02.440431 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-qcqzq/crc-debug-n6prh"] Jan 21 17:09:02 crc kubenswrapper[4890]: I0121 17:09:02.914364 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-r9vwb_19ea80f0-a74c-4d15-b17b-642efc0da703/cert-manager-controller/0.log" Jan 21 17:09:02 crc kubenswrapper[4890]: I0121 17:09:02.935337 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-nmk9k_2266e2cc-a129-4ec1-af61-8fc445b56deb/cert-manager-cainjector/0.log" Jan 21 17:09:02 crc kubenswrapper[4890]: I0121 17:09:02.948263 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-7jls9_5b1f0ea1-055b-4788-82b4-ac0a20174220/cert-manager-webhook/0.log" Jan 21 17:09:03 crc kubenswrapper[4890]: I0121 17:09:03.501747 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:03 crc kubenswrapper[4890]: I0121 17:09:03.546572 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1880817-80a9-4595-83e0-7f5eee2d4a18-host\") pod \"a1880817-80a9-4595-83e0-7f5eee2d4a18\" (UID: \"a1880817-80a9-4595-83e0-7f5eee2d4a18\") " Jan 21 17:09:03 crc kubenswrapper[4890]: I0121 17:09:03.546680 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58x5r\" (UniqueName: \"kubernetes.io/projected/a1880817-80a9-4595-83e0-7f5eee2d4a18-kube-api-access-58x5r\") pod \"a1880817-80a9-4595-83e0-7f5eee2d4a18\" (UID: \"a1880817-80a9-4595-83e0-7f5eee2d4a18\") " Jan 21 17:09:03 crc kubenswrapper[4890]: I0121 17:09:03.546720 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1880817-80a9-4595-83e0-7f5eee2d4a18-host" (OuterVolumeSpecName: "host") pod "a1880817-80a9-4595-83e0-7f5eee2d4a18" (UID: 
"a1880817-80a9-4595-83e0-7f5eee2d4a18"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:09:03 crc kubenswrapper[4890]: I0121 17:09:03.547187 4890 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a1880817-80a9-4595-83e0-7f5eee2d4a18-host\") on node \"crc\" DevicePath \"\"" Jan 21 17:09:03 crc kubenswrapper[4890]: I0121 17:09:03.564302 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1880817-80a9-4595-83e0-7f5eee2d4a18-kube-api-access-58x5r" (OuterVolumeSpecName: "kube-api-access-58x5r") pod "a1880817-80a9-4595-83e0-7f5eee2d4a18" (UID: "a1880817-80a9-4595-83e0-7f5eee2d4a18"). InnerVolumeSpecName "kube-api-access-58x5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:09:03 crc kubenswrapper[4890]: I0121 17:09:03.648984 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58x5r\" (UniqueName: \"kubernetes.io/projected/a1880817-80a9-4595-83e0-7f5eee2d4a18-kube-api-access-58x5r\") on node \"crc\" DevicePath \"\"" Jan 21 17:09:03 crc kubenswrapper[4890]: I0121 17:09:03.925142 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1880817-80a9-4595-83e0-7f5eee2d4a18" path="/var/lib/kubelet/pods/a1880817-80a9-4595-83e0-7f5eee2d4a18/volumes" Jan 21 17:09:04 crc kubenswrapper[4890]: I0121 17:09:04.413408 4890 scope.go:117] "RemoveContainer" containerID="57fb40883e9b882aeb23f87c226cd605976ccf994d2f37c738595e9f2fe6688a" Jan 21 17:09:04 crc kubenswrapper[4890]: I0121 17:09:04.413458 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcqzq/crc-debug-n6prh" Jan 21 17:09:07 crc kubenswrapper[4890]: I0121 17:09:07.887034 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-c252f_11c1f035-25b4-4626-b3ee-ead3153e9987/nmstate-console-plugin/0.log" Jan 21 17:09:07 crc kubenswrapper[4890]: I0121 17:09:07.915706 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b2gxv_1655f2bc-b930-4937-94fb-2a9649e53af7/nmstate-handler/0.log" Jan 21 17:09:07 crc kubenswrapper[4890]: I0121 17:09:07.942331 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-fxsfm_f7841b98-e096-4147-ba53-3a18f33d4c6b/nmstate-metrics/0.log" Jan 21 17:09:07 crc kubenswrapper[4890]: I0121 17:09:07.953331 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-fxsfm_f7841b98-e096-4147-ba53-3a18f33d4c6b/kube-rbac-proxy/0.log" Jan 21 17:09:07 crc kubenswrapper[4890]: I0121 17:09:07.970755 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-n8zvx_829d883b-bd0f-40fb-bf6e-be3defd44399/nmstate-operator/0.log" Jan 21 17:09:07 crc kubenswrapper[4890]: I0121 17:09:07.987229 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t5bgt_a2e8d43c-364c-4f94-a394-619cf820048c/nmstate-webhook/0.log" Jan 21 17:09:18 crc kubenswrapper[4890]: I0121 17:09:18.298428 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-z6xnk_cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9/controller/0.log" Jan 21 17:09:18 crc kubenswrapper[4890]: I0121 17:09:18.309189 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-z6xnk_cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9/kube-rbac-proxy/0.log" Jan 21 17:09:18 crc 
kubenswrapper[4890]: I0121 17:09:18.329518 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/controller/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.171949 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/frr/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.182235 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/reloader/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.190052 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/frr-metrics/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.196383 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/kube-rbac-proxy/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.203518 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/kube-rbac-proxy-frr/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.211190 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/cp-frr-files/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.218374 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/cp-reloader/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.225263 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/cp-metrics/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.235219 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dqvjk_7c1ba220-b8cb-40b8-ae09-e94423a126a5/frr-k8s-webhook-server/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.265966 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-66c89b74b6-f8w84_2a07a757-38cf-476f-acf2-758e953dc057/manager/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.281439 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5c9dbfd5c5-cnkj6_9751dff3-fe38-4186-9f8c-e8aa4a08cf6d/webhook-server/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.755111 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f25hv_de8bead9-3c0f-4ba7-908b-c8c21763eb7c/speaker/0.log" Jan 21 17:09:20 crc kubenswrapper[4890]: I0121 17:09:20.762983 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f25hv_de8bead9-3c0f-4ba7-908b-c8c21763eb7c/kube-rbac-proxy/0.log" Jan 21 17:09:23 crc kubenswrapper[4890]: I0121 17:09:23.310974 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n_8a88852c-2a61-4ecb-abb0-5679e06d7c39/extract/0.log" Jan 21 17:09:23 crc kubenswrapper[4890]: I0121 17:09:23.319520 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n_8a88852c-2a61-4ecb-abb0-5679e06d7c39/util/0.log" Jan 21 17:09:23 crc kubenswrapper[4890]: I0121 17:09:23.350447 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afn74n_8a88852c-2a61-4ecb-abb0-5679e06d7c39/pull/0.log" Jan 21 17:09:23 crc kubenswrapper[4890]: I0121 17:09:23.366957 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9_4d8a46ee-3e93-4f73-a9bf-f0f797698cc8/extract/0.log" Jan 21 17:09:23 crc kubenswrapper[4890]: I0121 17:09:23.374495 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9_4d8a46ee-3e93-4f73-a9bf-f0f797698cc8/util/0.log" Jan 21 17:09:23 crc kubenswrapper[4890]: I0121 17:09:23.382141 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqqrx9_4d8a46ee-3e93-4f73-a9bf-f0f797698cc8/pull/0.log" Jan 21 17:09:23 crc kubenswrapper[4890]: I0121 17:09:23.392980 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2_28f42f06-2b26-4f38-9fb3-653acad943d2/extract/0.log" Jan 21 17:09:23 crc kubenswrapper[4890]: I0121 17:09:23.401890 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2_28f42f06-2b26-4f38-9fb3-653acad943d2/util/0.log" Jan 21 17:09:23 crc kubenswrapper[4890]: I0121 17:09:23.418818 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rvgs2_28f42f06-2b26-4f38-9fb3-653acad943d2/pull/0.log" Jan 21 17:09:24 crc kubenswrapper[4890]: I0121 17:09:24.444984 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84nbz_3d76bd25-e92c-4f05-bdb1-149b09a31d2f/registry-server/0.log" Jan 21 17:09:24 crc kubenswrapper[4890]: I0121 17:09:24.449984 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84nbz_3d76bd25-e92c-4f05-bdb1-149b09a31d2f/extract-utilities/0.log" Jan 21 17:09:24 crc kubenswrapper[4890]: I0121 
17:09:24.456714 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-84nbz_3d76bd25-e92c-4f05-bdb1-149b09a31d2f/extract-content/0.log" Jan 21 17:09:24 crc kubenswrapper[4890]: I0121 17:09:24.543291 4890 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod1c2cb648-2a9b-444b-aec4-d88dde8f951b"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1c2cb648-2a9b-444b-aec4-d88dde8f951b] : Timed out while waiting for systemd to remove kubepods-burstable-pod1c2cb648_2a9b_444b_aec4_d88dde8f951b.slice" Jan 21 17:09:24 crc kubenswrapper[4890]: E0121 17:09:24.543345 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod1c2cb648-2a9b-444b-aec4-d88dde8f951b] : unable to destroy cgroup paths for cgroup [kubepods burstable pod1c2cb648-2a9b-444b-aec4-d88dde8f951b] : Timed out while waiting for systemd to remove kubepods-burstable-pod1c2cb648_2a9b_444b_aec4_d88dde8f951b.slice" pod="openshift-marketplace/redhat-marketplace-lcrln" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" Jan 21 17:09:24 crc kubenswrapper[4890]: I0121 17:09:24.583824 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcrln" Jan 21 17:09:24 crc kubenswrapper[4890]: I0121 17:09:24.633271 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcrln"] Jan 21 17:09:24 crc kubenswrapper[4890]: I0121 17:09:24.638885 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcrln"] Jan 21 17:09:25 crc kubenswrapper[4890]: I0121 17:09:25.621276 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x7prm_eed3ba63-937e-4ef3-9eda-75221a7ee3c4/registry-server/0.log" Jan 21 17:09:25 crc kubenswrapper[4890]: I0121 17:09:25.628666 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x7prm_eed3ba63-937e-4ef3-9eda-75221a7ee3c4/extract-utilities/0.log" Jan 21 17:09:25 crc kubenswrapper[4890]: I0121 17:09:25.636269 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x7prm_eed3ba63-937e-4ef3-9eda-75221a7ee3c4/extract-content/0.log" Jan 21 17:09:25 crc kubenswrapper[4890]: I0121 17:09:25.654041 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2g5nx_7b1e522c-d015-4945-9062-183e67bb8239/marketplace-operator/0.log" Jan 21 17:09:25 crc kubenswrapper[4890]: I0121 17:09:25.866039 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9l4rd_43ba982c-8921-4fd0-96af-2522d1323265/registry-server/0.log" Jan 21 17:09:25 crc kubenswrapper[4890]: I0121 17:09:25.870641 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9l4rd_43ba982c-8921-4fd0-96af-2522d1323265/extract-utilities/0.log" Jan 21 17:09:25 crc kubenswrapper[4890]: I0121 17:09:25.878471 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-9l4rd_43ba982c-8921-4fd0-96af-2522d1323265/extract-content/0.log" Jan 21 17:09:25 crc kubenswrapper[4890]: I0121 17:09:25.924075 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2cb648-2a9b-444b-aec4-d88dde8f951b" path="/var/lib/kubelet/pods/1c2cb648-2a9b-444b-aec4-d88dde8f951b/volumes" Jan 21 17:09:26 crc kubenswrapper[4890]: I0121 17:09:26.678726 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jnlgx_1341039e-0f34-43a0-8fc3-36a65ecb8505/registry-server/0.log" Jan 21 17:09:26 crc kubenswrapper[4890]: I0121 17:09:26.683779 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jnlgx_1341039e-0f34-43a0-8fc3-36a65ecb8505/extract-utilities/0.log" Jan 21 17:09:26 crc kubenswrapper[4890]: I0121 17:09:26.691500 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jnlgx_1341039e-0f34-43a0-8fc3-36a65ecb8505/extract-content/0.log" Jan 21 17:10:09 crc kubenswrapper[4890]: I0121 17:10:09.827736 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-r9vwb_19ea80f0-a74c-4d15-b17b-642efc0da703/cert-manager-controller/0.log" Jan 21 17:10:09 crc kubenswrapper[4890]: I0121 17:10:09.854953 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-nmk9k_2266e2cc-a129-4ec1-af61-8fc445b56deb/cert-manager-cainjector/0.log" Jan 21 17:10:09 crc kubenswrapper[4890]: I0121 17:10:09.870755 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-7jls9_5b1f0ea1-055b-4788-82b4-ac0a20174220/cert-manager-webhook/0.log" Jan 21 17:10:10 crc kubenswrapper[4890]: I0121 17:10:10.131103 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-z6xnk_cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9/controller/0.log" Jan 21 17:10:10 crc kubenswrapper[4890]: I0121 17:10:10.140157 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-z6xnk_cbce4dcb-58d8-4f4e-8be6-538ddd3e8da9/kube-rbac-proxy/0.log" Jan 21 17:10:10 crc kubenswrapper[4890]: I0121 17:10:10.169145 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/controller/0.log" Jan 21 17:10:10 crc kubenswrapper[4890]: I0121 17:10:10.833847 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6_b3a9acd8-9d34-419d-861b-232a4de671b9/extract/0.log" Jan 21 17:10:10 crc kubenswrapper[4890]: I0121 17:10:10.843962 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6_b3a9acd8-9d34-419d-861b-232a4de671b9/util/0.log" Jan 21 17:10:10 crc kubenswrapper[4890]: I0121 17:10:10.856542 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6_b3a9acd8-9d34-419d-861b-232a4de671b9/pull/0.log" Jan 21 17:10:10 crc kubenswrapper[4890]: I0121 17:10:10.977984 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-f92ld_2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.040275 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-5c7zf_0f4bb54d-23a1-4b41-995f-d7affd9cd504/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.059819 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-l7ndf_4319998f-d413-4412-bffd-7123d46bce19/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.172764 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h6r95_44fcf69f-7131-43c4-9303-f5636c294644/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.184923 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-v7zt4_8791802a-0f5d-4d66-a19b-bf2b373ddd56/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.200264 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pqzjj_91df512f-6657-44f1-b643-c18778e5d159/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.776124 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-bbwtr_6dd75ffe-4c90-493d-b5af-313056532562/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.790507 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-xvfb4_41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.900172 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-rklqr_d1fd4cb9-f562-48b3-b829-55f48fc8a414/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.911124 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-jcfgv_ab7d4301-6caa-4a1e-a634-e4a355271b68/manager/0.log" Jan 21 17:10:11 crc kubenswrapper[4890]: I0121 17:10:11.995856 4890 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-nfvzq_88bf9325-183a-4b37-8278-fdf6a95edf3c/manager/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.078405 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-wmm44_cb112e2e-1c3b-4701-87ec-dee15131d2a9/manager/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.206082 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-4lhwm_eeba017a-ca09-444f-a4c6-895ec31b914b/manager/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.223892 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-nwlnf_4950c09f-4cbd-49e4-906f-e4451c610111/manager/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.247461 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986dzvgss_daa2bbb5-55a8-4920-9109-45bcd643bd9f/manager/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.414993 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-x45bd_e99a0335-92f6-4871-983a-b61c4d78256e/operator/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.514666 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/frr/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.525616 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/reloader/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.530287 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/frr-metrics/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.538514 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/kube-rbac-proxy/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.546375 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/kube-rbac-proxy-frr/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.555134 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/cp-frr-files/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.567649 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/cp-reloader/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.575381 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v7hrv_d1f32d47-a574-442b-be16-9cdb86b30aa8/cp-metrics/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.598445 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dqvjk_7c1ba220-b8cb-40b8-ae09-e94423a126a5/frr-k8s-webhook-server/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.642247 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-66c89b74b6-f8w84_2a07a757-38cf-476f-acf2-758e953dc057/manager/0.log" Jan 21 17:10:12 crc kubenswrapper[4890]: I0121 17:10:12.654609 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5c9dbfd5c5-cnkj6_9751dff3-fe38-4186-9f8c-e8aa4a08cf6d/webhook-server/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.234463 4890 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f25hv_de8bead9-3c0f-4ba7-908b-c8c21763eb7c/speaker/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.244406 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f25hv_de8bead9-3c0f-4ba7-908b-c8c21763eb7c/kube-rbac-proxy/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.619093 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-tqb8d_41b6e8d7-5b8e-4953-bb8c-af061e0fda60/manager/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.733029 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4dn5g_2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9/registry-server/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.819415 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9nwtw_f176bee8-4c10-4d65-bd9c-5e95bdc707c6/manager/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.843824 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-8qwg4_59304f72-3a8b-460b-989d-706b9e898d76/manager/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.867838 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8rlhm_cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f/operator/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.895365 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-6h8pt_acf11348-ddc8-494c-8ecc-ad1f5f44366f/manager/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.975832 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-sqqwl_105a410d-5ae8-44ad-9e48-e7cd00ae3c27/manager/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.984568 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cckzf_03a78be9-bd85-449f-93b6-1379195280c0/manager/0.log" Jan 21 17:10:13 crc kubenswrapper[4890]: I0121 17:10:13.993385 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-x2dl5_435e0e5d-88a8-4737-aa7d-cefffc292c23/manager/0.log" Jan 21 17:10:14 crc kubenswrapper[4890]: I0121 17:10:14.109321 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-r9vwb_19ea80f0-a74c-4d15-b17b-642efc0da703/cert-manager-controller/0.log" Jan 21 17:10:14 crc kubenswrapper[4890]: I0121 17:10:14.127290 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-nmk9k_2266e2cc-a129-4ec1-af61-8fc445b56deb/cert-manager-cainjector/0.log" Jan 21 17:10:14 crc kubenswrapper[4890]: I0121 17:10:14.136609 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-7jls9_5b1f0ea1-055b-4788-82b4-ac0a20174220/cert-manager-webhook/0.log" Jan 21 17:10:14 crc kubenswrapper[4890]: I0121 17:10:14.854465 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fsz8b_8897f3cf-e9a2-40ce-9353-018d197f47b1/control-plane-machine-set-operator/0.log" Jan 21 17:10:14 crc kubenswrapper[4890]: I0121 17:10:14.865814 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vd9wg_513d9ec4-2b91-4609-ba1a-0e6f0b551d1a/kube-rbac-proxy/0.log" Jan 21 17:10:14 crc kubenswrapper[4890]: I0121 17:10:14.879342 4890 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vd9wg_513d9ec4-2b91-4609-ba1a-0e6f0b551d1a/machine-api-operator/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.366451 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-c252f_11c1f035-25b4-4626-b3ee-ead3153e9987/nmstate-console-plugin/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.389183 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b2gxv_1655f2bc-b930-4937-94fb-2a9649e53af7/nmstate-handler/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.401333 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-fxsfm_f7841b98-e096-4147-ba53-3a18f33d4c6b/nmstate-metrics/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.416532 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-fxsfm_f7841b98-e096-4147-ba53-3a18f33d4c6b/kube-rbac-proxy/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.440675 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-n8zvx_829d883b-bd0f-40fb-bf6e-be3defd44399/nmstate-operator/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.454079 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t5bgt_a2e8d43c-364c-4f94-a394-619cf820048c/nmstate-webhook/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.598450 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6_b3a9acd8-9d34-419d-861b-232a4de671b9/extract/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.605874 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6_b3a9acd8-9d34-419d-861b-232a4de671b9/util/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.612881 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2ezpjh6_b3a9acd8-9d34-419d-861b-232a4de671b9/pull/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.696735 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-f92ld_2c2f7bc7-66b1-4a91-8a14-9d7d2a00a538/manager/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.742792 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-5c7zf_0f4bb54d-23a1-4b41-995f-d7affd9cd504/manager/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.757951 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-l7ndf_4319998f-d413-4412-bffd-7123d46bce19/manager/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.849748 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h6r95_44fcf69f-7131-43c4-9303-f5636c294644/manager/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.860407 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-v7zt4_8791802a-0f5d-4d66-a19b-bf2b373ddd56/manager/0.log" Jan 21 17:10:15 crc kubenswrapper[4890]: I0121 17:10:15.868336 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pqzjj_91df512f-6657-44f1-b643-c18778e5d159/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.229910 4890 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-bbwtr_6dd75ffe-4c90-493d-b5af-313056532562/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.243554 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-xvfb4_41dc7f8d-37f0-4ea4-9f9c-75d563ce3a14/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.331722 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-rklqr_d1fd4cb9-f562-48b3-b829-55f48fc8a414/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.345338 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-jcfgv_ab7d4301-6caa-4a1e-a634-e4a355271b68/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.396764 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-nfvzq_88bf9325-183a-4b37-8278-fdf6a95edf3c/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.443038 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-wmm44_cb112e2e-1c3b-4701-87ec-dee15131d2a9/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.549875 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-4lhwm_eeba017a-ca09-444f-a4c6-895ec31b914b/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.559777 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-nwlnf_4950c09f-4cbd-49e4-906f-e4451c610111/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.572691 4890 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986dzvgss_daa2bbb5-55a8-4920-9109-45bcd643bd9f/manager/0.log" Jan 21 17:10:16 crc kubenswrapper[4890]: I0121 17:10:16.692275 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-x45bd_e99a0335-92f6-4871-983a-b61c4d78256e/operator/0.log" Jan 21 17:10:17 crc kubenswrapper[4890]: I0121 17:10:17.755065 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-tqb8d_41b6e8d7-5b8e-4953-bb8c-af061e0fda60/manager/0.log" Jan 21 17:10:17 crc kubenswrapper[4890]: I0121 17:10:17.846498 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4dn5g_2dfe7b61-5b89-47ef-b9cf-73de9abaa1e9/registry-server/0.log" Jan 21 17:10:17 crc kubenswrapper[4890]: I0121 17:10:17.923250 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9nwtw_f176bee8-4c10-4d65-bd9c-5e95bdc707c6/manager/0.log" Jan 21 17:10:17 crc kubenswrapper[4890]: I0121 17:10:17.946514 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-8qwg4_59304f72-3a8b-460b-989d-706b9e898d76/manager/0.log" Jan 21 17:10:17 crc kubenswrapper[4890]: I0121 17:10:17.984512 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8rlhm_cb4b3ed1-b2b7-4f37-b5c0-9eed87ee074f/operator/0.log" Jan 21 17:10:18 crc kubenswrapper[4890]: I0121 17:10:18.018177 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-6h8pt_acf11348-ddc8-494c-8ecc-ad1f5f44366f/manager/0.log" Jan 21 17:10:18 crc kubenswrapper[4890]: I0121 
17:10:18.104914 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-sqqwl_105a410d-5ae8-44ad-9e48-e7cd00ae3c27/manager/0.log" Jan 21 17:10:18 crc kubenswrapper[4890]: I0121 17:10:18.114300 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cckzf_03a78be9-bd85-449f-93b6-1379195280c0/manager/0.log" Jan 21 17:10:18 crc kubenswrapper[4890]: I0121 17:10:18.124615 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-x2dl5_435e0e5d-88a8-4737-aa7d-cefffc292c23/manager/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.067471 4890 scope.go:117] "RemoveContainer" containerID="206d1a5eca57087274f0adb0394363fb034dcbf7d265fc3e4da99fceeb89bb25" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.092435 4890 scope.go:117] "RemoveContainer" containerID="3b21d9a91e998e4bc7645a8505a2f99d53bebfd03763083d3c43d93f49def3b3" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.648120 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-msckx_9260bc10-0bda-4046-9b76-78b103f176be/kube-multus-additional-cni-plugins/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.656060 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-msckx_9260bc10-0bda-4046-9b76-78b103f176be/egress-router-binary-copy/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.663454 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-msckx_9260bc10-0bda-4046-9b76-78b103f176be/cni-plugins/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.670286 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-msckx_9260bc10-0bda-4046-9b76-78b103f176be/bond-cni-plugin/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.677193 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-msckx_9260bc10-0bda-4046-9b76-78b103f176be/routeoverride-cni/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.683719 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-msckx_9260bc10-0bda-4046-9b76-78b103f176be/whereabouts-cni-bincopy/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.690268 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-msckx_9260bc10-0bda-4046-9b76-78b103f176be/whereabouts-cni/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.720328 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-vkkcm_2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a/multus-admission-controller/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.727054 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-vkkcm_2a9189f4-78cf-4f3e-8a9d-cdcc427d0c7a/kube-rbac-proxy/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.776868 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/2.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.852108 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pflt5_eba30f20-e5ad-4888-850d-1715115ab8bd/kube-multus/3.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.899686 4890 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_network-metrics-daemon-j9mfr_a86abbe4-e7c5-4a3e-a8d7-02d82267ded6/network-metrics-daemon/0.log" Jan 21 17:10:19 crc kubenswrapper[4890]: I0121 17:10:19.905301 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-j9mfr_a86abbe4-e7c5-4a3e-a8d7-02d82267ded6/kube-rbac-proxy/0.log" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.010741 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9h72j"] Jan 21 17:10:29 crc kubenswrapper[4890]: E0121 17:10:29.011921 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1880817-80a9-4595-83e0-7f5eee2d4a18" containerName="container-00" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.011937 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1880817-80a9-4595-83e0-7f5eee2d4a18" containerName="container-00" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.012126 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1880817-80a9-4595-83e0-7f5eee2d4a18" containerName="container-00" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.017324 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.020648 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h72j"] Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.073850 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-catalog-content\") pod \"certified-operators-9h72j\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.074009 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-utilities\") pod \"certified-operators-9h72j\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.074098 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjzqk\" (UniqueName: \"kubernetes.io/projected/ba974064-ff01-4645-8b5d-0e0b541059f5-kube-api-access-cjzqk\") pod \"certified-operators-9h72j\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.175374 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-catalog-content\") pod \"certified-operators-9h72j\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.175506 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-utilities\") pod \"certified-operators-9h72j\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.175563 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjzqk\" (UniqueName: \"kubernetes.io/projected/ba974064-ff01-4645-8b5d-0e0b541059f5-kube-api-access-cjzqk\") pod \"certified-operators-9h72j\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.176005 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-catalog-content\") pod \"certified-operators-9h72j\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.176074 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-utilities\") pod \"certified-operators-9h72j\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.198288 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjzqk\" (UniqueName: \"kubernetes.io/projected/ba974064-ff01-4645-8b5d-0e0b541059f5-kube-api-access-cjzqk\") pod \"certified-operators-9h72j\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.337734 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:29 crc kubenswrapper[4890]: I0121 17:10:29.866783 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h72j"] Jan 21 17:10:29 crc kubenswrapper[4890]: W0121 17:10:29.876410 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba974064_ff01_4645_8b5d_0e0b541059f5.slice/crio-9315b5105b29ca18b8cd94b2dc957371862510090639e0cc727b896e54092bb0 WatchSource:0}: Error finding container 9315b5105b29ca18b8cd94b2dc957371862510090639e0cc727b896e54092bb0: Status 404 returned error can't find the container with id 9315b5105b29ca18b8cd94b2dc957371862510090639e0cc727b896e54092bb0 Jan 21 17:10:30 crc kubenswrapper[4890]: I0121 17:10:30.124395 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h72j" event={"ID":"ba974064-ff01-4645-8b5d-0e0b541059f5","Type":"ContainerStarted","Data":"9315b5105b29ca18b8cd94b2dc957371862510090639e0cc727b896e54092bb0"} Jan 21 17:10:31 crc kubenswrapper[4890]: I0121 17:10:31.136823 4890 generic.go:334] "Generic (PLEG): container finished" podID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerID="2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666" exitCode=0 Jan 21 17:10:31 crc kubenswrapper[4890]: I0121 17:10:31.137420 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h72j" event={"ID":"ba974064-ff01-4645-8b5d-0e0b541059f5","Type":"ContainerDied","Data":"2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666"} Jan 21 17:10:31 crc kubenswrapper[4890]: I0121 17:10:31.139510 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:10:32 crc kubenswrapper[4890]: I0121 17:10:32.145875 4890 generic.go:334] "Generic (PLEG): container finished" 
podID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerID="a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78" exitCode=0 Jan 21 17:10:32 crc kubenswrapper[4890]: I0121 17:10:32.146184 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h72j" event={"ID":"ba974064-ff01-4645-8b5d-0e0b541059f5","Type":"ContainerDied","Data":"a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78"} Jan 21 17:10:33 crc kubenswrapper[4890]: I0121 17:10:33.158911 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h72j" event={"ID":"ba974064-ff01-4645-8b5d-0e0b541059f5","Type":"ContainerStarted","Data":"0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8"} Jan 21 17:10:33 crc kubenswrapper[4890]: I0121 17:10:33.184851 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9h72j" podStartSLOduration=3.7714530010000002 podStartE2EDuration="5.184832961s" podCreationTimestamp="2026-01-21 17:10:28 +0000 UTC" firstStartedPulling="2026-01-21 17:10:31.139226594 +0000 UTC m=+5913.500669013" lastFinishedPulling="2026-01-21 17:10:32.552606544 +0000 UTC m=+5914.914048973" observedRunningTime="2026-01-21 17:10:33.177590481 +0000 UTC m=+5915.539032900" watchObservedRunningTime="2026-01-21 17:10:33.184832961 +0000 UTC m=+5915.546275370" Jan 21 17:10:39 crc kubenswrapper[4890]: I0121 17:10:39.338661 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:39 crc kubenswrapper[4890]: I0121 17:10:39.339301 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:39 crc kubenswrapper[4890]: I0121 17:10:39.394030 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9h72j" Jan 21 
17:10:40 crc kubenswrapper[4890]: I0121 17:10:40.264977 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:40 crc kubenswrapper[4890]: I0121 17:10:40.321060 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h72j"] Jan 21 17:10:42 crc kubenswrapper[4890]: I0121 17:10:42.235012 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9h72j" podUID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerName="registry-server" containerID="cri-o://0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8" gracePeriod=2 Jan 21 17:10:42 crc kubenswrapper[4890]: I0121 17:10:42.615172 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:42 crc kubenswrapper[4890]: I0121 17:10:42.762517 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-catalog-content\") pod \"ba974064-ff01-4645-8b5d-0e0b541059f5\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " Jan 21 17:10:42 crc kubenswrapper[4890]: I0121 17:10:42.762615 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-utilities\") pod \"ba974064-ff01-4645-8b5d-0e0b541059f5\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " Jan 21 17:10:42 crc kubenswrapper[4890]: I0121 17:10:42.762759 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjzqk\" (UniqueName: \"kubernetes.io/projected/ba974064-ff01-4645-8b5d-0e0b541059f5-kube-api-access-cjzqk\") pod \"ba974064-ff01-4645-8b5d-0e0b541059f5\" (UID: \"ba974064-ff01-4645-8b5d-0e0b541059f5\") " Jan 
21 17:10:42 crc kubenswrapper[4890]: I0121 17:10:42.763920 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-utilities" (OuterVolumeSpecName: "utilities") pod "ba974064-ff01-4645-8b5d-0e0b541059f5" (UID: "ba974064-ff01-4645-8b5d-0e0b541059f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:42 crc kubenswrapper[4890]: I0121 17:10:42.783785 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba974064-ff01-4645-8b5d-0e0b541059f5-kube-api-access-cjzqk" (OuterVolumeSpecName: "kube-api-access-cjzqk") pod "ba974064-ff01-4645-8b5d-0e0b541059f5" (UID: "ba974064-ff01-4645-8b5d-0e0b541059f5"). InnerVolumeSpecName "kube-api-access-cjzqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:10:42 crc kubenswrapper[4890]: I0121 17:10:42.866033 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjzqk\" (UniqueName: \"kubernetes.io/projected/ba974064-ff01-4645-8b5d-0e0b541059f5-kube-api-access-cjzqk\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:42 crc kubenswrapper[4890]: I0121 17:10:42.866096 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.078013 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba974064-ff01-4645-8b5d-0e0b541059f5" (UID: "ba974064-ff01-4645-8b5d-0e0b541059f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.171026 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba974064-ff01-4645-8b5d-0e0b541059f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.246206 4890 generic.go:334] "Generic (PLEG): container finished" podID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerID="0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8" exitCode=0 Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.246285 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h72j" event={"ID":"ba974064-ff01-4645-8b5d-0e0b541059f5","Type":"ContainerDied","Data":"0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8"} Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.246328 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h72j" event={"ID":"ba974064-ff01-4645-8b5d-0e0b541059f5","Type":"ContainerDied","Data":"9315b5105b29ca18b8cd94b2dc957371862510090639e0cc727b896e54092bb0"} Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.246566 4890 scope.go:117] "RemoveContainer" containerID="0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.246775 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h72j" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.283905 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h72j"] Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.290188 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9h72j"] Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.291663 4890 scope.go:117] "RemoveContainer" containerID="a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.308520 4890 scope.go:117] "RemoveContainer" containerID="2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.351214 4890 scope.go:117] "RemoveContainer" containerID="0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8" Jan 21 17:10:43 crc kubenswrapper[4890]: E0121 17:10:43.351928 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8\": container with ID starting with 0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8 not found: ID does not exist" containerID="0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.351988 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8"} err="failed to get container status \"0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8\": rpc error: code = NotFound desc = could not find container \"0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8\": container with ID starting with 0143af3bb0a842f194a4057abe5824a3b3fc8426e79f1c52dcaa5a78739df4c8 not 
found: ID does not exist" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.352032 4890 scope.go:117] "RemoveContainer" containerID="a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78" Jan 21 17:10:43 crc kubenswrapper[4890]: E0121 17:10:43.352444 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78\": container with ID starting with a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78 not found: ID does not exist" containerID="a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.352480 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78"} err="failed to get container status \"a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78\": rpc error: code = NotFound desc = could not find container \"a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78\": container with ID starting with a623df8d584c95742f6bbf9bdb0232f031d282893926cb5e4d08e8d11fe95f78 not found: ID does not exist" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.352504 4890 scope.go:117] "RemoveContainer" containerID="2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666" Jan 21 17:10:43 crc kubenswrapper[4890]: E0121 17:10:43.352883 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666\": container with ID starting with 2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666 not found: ID does not exist" containerID="2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.352945 4890 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666"} err="failed to get container status \"2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666\": rpc error: code = NotFound desc = could not find container \"2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666\": container with ID starting with 2aad6701594fa38606da584ef6d4aa6e54a07cb0e5ef936fc944371279e41666 not found: ID does not exist" Jan 21 17:10:43 crc kubenswrapper[4890]: I0121 17:10:43.923663 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba974064-ff01-4645-8b5d-0e0b541059f5" path="/var/lib/kubelet/pods/ba974064-ff01-4645-8b5d-0e0b541059f5/volumes" Jan 21 17:10:48 crc kubenswrapper[4890]: I0121 17:10:48.761946 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:10:48 crc kubenswrapper[4890]: I0121 17:10:48.762686 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:11:18 crc kubenswrapper[4890]: I0121 17:11:18.761896 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:11:18 crc kubenswrapper[4890]: I0121 17:11:18.762435 4890 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:11:19 crc kubenswrapper[4890]: I0121 17:11:19.171311 4890 scope.go:117] "RemoveContainer" containerID="4d3cfa570544df055cdfe5eb32ce1dfcaeda29dc4502eaa778137897415171db" Jan 21 17:11:48 crc kubenswrapper[4890]: I0121 17:11:48.818414 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:11:48 crc kubenswrapper[4890]: I0121 17:11:48.819035 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:11:48 crc kubenswrapper[4890]: I0121 17:11:48.819102 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 17:11:48 crc kubenswrapper[4890]: I0121 17:11:48.819877 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:11:48 crc kubenswrapper[4890]: I0121 17:11:48.819945 4890 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" gracePeriod=600 Jan 21 17:11:49 crc kubenswrapper[4890]: E0121 17:11:49.516188 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:11:49 crc kubenswrapper[4890]: I0121 17:11:49.838447 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" exitCode=0 Jan 21 17:11:49 crc kubenswrapper[4890]: I0121 17:11:49.838496 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537"} Jan 21 17:11:49 crc kubenswrapper[4890]: I0121 17:11:49.838566 4890 scope.go:117] "RemoveContainer" containerID="441e2804322bd206b49ddc5d039d873df556718ce8ad59e56fedf064eaf06c01" Jan 21 17:11:49 crc kubenswrapper[4890]: I0121 17:11:49.850064 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:11:49 crc kubenswrapper[4890]: E0121 17:11:49.850872 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:12:05 crc kubenswrapper[4890]: I0121 17:12:04.915402 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:12:05 crc kubenswrapper[4890]: E0121 17:12:04.916577 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:12:17 crc kubenswrapper[4890]: I0121 17:12:17.931753 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:12:17 crc kubenswrapper[4890]: E0121 17:12:17.932652 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:12:32 crc kubenswrapper[4890]: I0121 17:12:32.914882 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:12:32 crc kubenswrapper[4890]: E0121 17:12:32.915732 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:12:47 crc kubenswrapper[4890]: I0121 17:12:47.931982 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:12:47 crc kubenswrapper[4890]: E0121 17:12:47.932848 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.781454 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hfvvl"] Jan 21 17:12:50 crc kubenswrapper[4890]: E0121 17:12:50.782740 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerName="extract-utilities" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.782781 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerName="extract-utilities" Jan 21 17:12:50 crc kubenswrapper[4890]: E0121 17:12:50.782838 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerName="extract-content" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.782859 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerName="extract-content" Jan 21 17:12:50 crc kubenswrapper[4890]: E0121 17:12:50.782903 4890 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerName="registry-server" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.782922 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerName="registry-server" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.783468 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba974064-ff01-4645-8b5d-0e0b541059f5" containerName="registry-server" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.788082 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.790160 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfvvl"] Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.895127 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-catalog-content\") pod \"community-operators-hfvvl\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.895227 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npdb6\" (UniqueName: \"kubernetes.io/projected/10a26313-530a-4164-a274-a0dad3abff5d-kube-api-access-npdb6\") pod \"community-operators-hfvvl\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.895574 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-utilities\") pod 
\"community-operators-hfvvl\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.997001 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-utilities\") pod \"community-operators-hfvvl\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.997737 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-utilities\") pod \"community-operators-hfvvl\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.997826 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-catalog-content\") pod \"community-operators-hfvvl\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.998203 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-catalog-content\") pod \"community-operators-hfvvl\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:50 crc kubenswrapper[4890]: I0121 17:12:50.999214 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npdb6\" (UniqueName: \"kubernetes.io/projected/10a26313-530a-4164-a274-a0dad3abff5d-kube-api-access-npdb6\") pod \"community-operators-hfvvl\" (UID: 
\"10a26313-530a-4164-a274-a0dad3abff5d\") " pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:51 crc kubenswrapper[4890]: I0121 17:12:51.026519 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npdb6\" (UniqueName: \"kubernetes.io/projected/10a26313-530a-4164-a274-a0dad3abff5d-kube-api-access-npdb6\") pod \"community-operators-hfvvl\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:51 crc kubenswrapper[4890]: I0121 17:12:51.129065 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:12:51 crc kubenswrapper[4890]: I0121 17:12:51.653121 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfvvl"] Jan 21 17:12:52 crc kubenswrapper[4890]: I0121 17:12:52.368170 4890 generic.go:334] "Generic (PLEG): container finished" podID="10a26313-530a-4164-a274-a0dad3abff5d" containerID="01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5" exitCode=0 Jan 21 17:12:52 crc kubenswrapper[4890]: I0121 17:12:52.368235 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvvl" event={"ID":"10a26313-530a-4164-a274-a0dad3abff5d","Type":"ContainerDied","Data":"01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5"} Jan 21 17:12:52 crc kubenswrapper[4890]: I0121 17:12:52.368501 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvvl" event={"ID":"10a26313-530a-4164-a274-a0dad3abff5d","Type":"ContainerStarted","Data":"9e37ac375e2f210d2891891992825404a1ff1a90b074fe738d40ecc920cd9f65"} Jan 21 17:12:54 crc kubenswrapper[4890]: I0121 17:12:54.385822 4890 generic.go:334] "Generic (PLEG): container finished" podID="10a26313-530a-4164-a274-a0dad3abff5d" 
containerID="f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67" exitCode=0 Jan 21 17:12:54 crc kubenswrapper[4890]: I0121 17:12:54.385896 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvvl" event={"ID":"10a26313-530a-4164-a274-a0dad3abff5d","Type":"ContainerDied","Data":"f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67"} Jan 21 17:12:55 crc kubenswrapper[4890]: I0121 17:12:55.396838 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvvl" event={"ID":"10a26313-530a-4164-a274-a0dad3abff5d","Type":"ContainerStarted","Data":"6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632"} Jan 21 17:12:55 crc kubenswrapper[4890]: I0121 17:12:55.420839 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hfvvl" podStartSLOduration=3.013221086 podStartE2EDuration="5.420804877s" podCreationTimestamp="2026-01-21 17:12:50 +0000 UTC" firstStartedPulling="2026-01-21 17:12:52.370385027 +0000 UTC m=+6054.731827436" lastFinishedPulling="2026-01-21 17:12:54.777968818 +0000 UTC m=+6057.139411227" observedRunningTime="2026-01-21 17:12:55.417786442 +0000 UTC m=+6057.779228871" watchObservedRunningTime="2026-01-21 17:12:55.420804877 +0000 UTC m=+6057.782247286" Jan 21 17:13:01 crc kubenswrapper[4890]: I0121 17:13:01.129610 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:13:01 crc kubenswrapper[4890]: I0121 17:13:01.130263 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:13:01 crc kubenswrapper[4890]: I0121 17:13:01.196399 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:13:01 crc kubenswrapper[4890]: I0121 
17:13:01.482613 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:13:01 crc kubenswrapper[4890]: I0121 17:13:01.525338 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfvvl"] Jan 21 17:13:02 crc kubenswrapper[4890]: I0121 17:13:02.915203 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:13:02 crc kubenswrapper[4890]: E0121 17:13:02.916925 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:13:03 crc kubenswrapper[4890]: I0121 17:13:03.455177 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hfvvl" podUID="10a26313-530a-4164-a274-a0dad3abff5d" containerName="registry-server" containerID="cri-o://6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632" gracePeriod=2 Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.392459 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.464058 4890 generic.go:334] "Generic (PLEG): container finished" podID="10a26313-530a-4164-a274-a0dad3abff5d" containerID="6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632" exitCode=0 Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.464107 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvvl" event={"ID":"10a26313-530a-4164-a274-a0dad3abff5d","Type":"ContainerDied","Data":"6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632"} Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.464139 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvvl" event={"ID":"10a26313-530a-4164-a274-a0dad3abff5d","Type":"ContainerDied","Data":"9e37ac375e2f210d2891891992825404a1ff1a90b074fe738d40ecc920cd9f65"} Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.464156 4890 scope.go:117] "RemoveContainer" containerID="6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.464173 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfvvl" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.480579 4890 scope.go:117] "RemoveContainer" containerID="f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.514341 4890 scope.go:117] "RemoveContainer" containerID="01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.542931 4890 scope.go:117] "RemoveContainer" containerID="6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632" Jan 21 17:13:04 crc kubenswrapper[4890]: E0121 17:13:04.543510 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632\": container with ID starting with 6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632 not found: ID does not exist" containerID="6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.543554 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632"} err="failed to get container status \"6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632\": rpc error: code = NotFound desc = could not find container \"6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632\": container with ID starting with 6e5aff6dffd79bad9f2459851adcf54a03c1e202b1b8954664bb98eaad57d632 not found: ID does not exist" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.543584 4890 scope.go:117] "RemoveContainer" containerID="f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67" Jan 21 17:13:04 crc kubenswrapper[4890]: E0121 17:13:04.544046 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67\": container with ID starting with f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67 not found: ID does not exist" containerID="f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.544096 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67"} err="failed to get container status \"f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67\": rpc error: code = NotFound desc = could not find container \"f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67\": container with ID starting with f19fcefc79f6d34b29c7ef78ad213f2ae274b73dfc8f8dbf617de002f6ed9d67 not found: ID does not exist" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.544113 4890 scope.go:117] "RemoveContainer" containerID="01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5" Jan 21 17:13:04 crc kubenswrapper[4890]: E0121 17:13:04.544485 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5\": container with ID starting with 01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5 not found: ID does not exist" containerID="01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.544511 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5"} err="failed to get container status \"01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5\": rpc error: code = NotFound desc = could not find container 
\"01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5\": container with ID starting with 01260ccc85fea0e4f6f63d328edbc1052f01854963146a4d2f21a65f4f428bd5 not found: ID does not exist" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.558196 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-utilities\") pod \"10a26313-530a-4164-a274-a0dad3abff5d\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.558249 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-catalog-content\") pod \"10a26313-530a-4164-a274-a0dad3abff5d\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.558449 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npdb6\" (UniqueName: \"kubernetes.io/projected/10a26313-530a-4164-a274-a0dad3abff5d-kube-api-access-npdb6\") pod \"10a26313-530a-4164-a274-a0dad3abff5d\" (UID: \"10a26313-530a-4164-a274-a0dad3abff5d\") " Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.560004 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-utilities" (OuterVolumeSpecName: "utilities") pod "10a26313-530a-4164-a274-a0dad3abff5d" (UID: "10a26313-530a-4164-a274-a0dad3abff5d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.563772 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a26313-530a-4164-a274-a0dad3abff5d-kube-api-access-npdb6" (OuterVolumeSpecName: "kube-api-access-npdb6") pod "10a26313-530a-4164-a274-a0dad3abff5d" (UID: "10a26313-530a-4164-a274-a0dad3abff5d"). InnerVolumeSpecName "kube-api-access-npdb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.612234 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10a26313-530a-4164-a274-a0dad3abff5d" (UID: "10a26313-530a-4164-a274-a0dad3abff5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.660920 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.660955 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a26313-530a-4164-a274-a0dad3abff5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.660966 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npdb6\" (UniqueName: \"kubernetes.io/projected/10a26313-530a-4164-a274-a0dad3abff5d-kube-api-access-npdb6\") on node \"crc\" DevicePath \"\"" Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 17:13:04.797962 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfvvl"] Jan 21 17:13:04 crc kubenswrapper[4890]: I0121 
17:13:04.807309 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hfvvl"] Jan 21 17:13:05 crc kubenswrapper[4890]: I0121 17:13:05.924305 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a26313-530a-4164-a274-a0dad3abff5d" path="/var/lib/kubelet/pods/10a26313-530a-4164-a274-a0dad3abff5d/volumes" Jan 21 17:13:14 crc kubenswrapper[4890]: I0121 17:13:14.914428 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:13:14 crc kubenswrapper[4890]: E0121 17:13:14.916080 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:13:25 crc kubenswrapper[4890]: I0121 17:13:25.913972 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:13:25 crc kubenswrapper[4890]: E0121 17:13:25.915033 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:13:36 crc kubenswrapper[4890]: I0121 17:13:36.914422 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:13:36 crc kubenswrapper[4890]: E0121 17:13:36.915299 4890 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:13:49 crc kubenswrapper[4890]: I0121 17:13:49.914855 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:13:49 crc kubenswrapper[4890]: E0121 17:13:49.915787 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:14:03 crc kubenswrapper[4890]: I0121 17:14:03.919731 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:14:03 crc kubenswrapper[4890]: E0121 17:14:03.920446 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:14:17 crc kubenswrapper[4890]: I0121 17:14:17.920107 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:14:17 crc kubenswrapper[4890]: E0121 17:14:17.920639 4890 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:14:27 crc kubenswrapper[4890]: I0121 17:14:27.060469 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-d7hbm"] Jan 21 17:14:27 crc kubenswrapper[4890]: I0121 17:14:27.071916 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0a87-account-create-update-hv2jn"] Jan 21 17:14:27 crc kubenswrapper[4890]: I0121 17:14:27.081831 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-d7hbm"] Jan 21 17:14:27 crc kubenswrapper[4890]: I0121 17:14:27.089337 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0a87-account-create-update-hv2jn"] Jan 21 17:14:27 crc kubenswrapper[4890]: I0121 17:14:27.931154 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01496eb9-e1e6-45fc-872e-63b8be1baec4" path="/var/lib/kubelet/pods/01496eb9-e1e6-45fc-872e-63b8be1baec4/volumes" Jan 21 17:14:27 crc kubenswrapper[4890]: I0121 17:14:27.931956 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10501a22-53ef-4e70-9746-bea547a51fac" path="/var/lib/kubelet/pods/10501a22-53ef-4e70-9746-bea547a51fac/volumes" Jan 21 17:14:29 crc kubenswrapper[4890]: I0121 17:14:29.916945 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:14:29 crc kubenswrapper[4890]: E0121 17:14:29.917444 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:14:34 crc kubenswrapper[4890]: I0121 17:14:34.045770 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-7m788"] Jan 21 17:14:34 crc kubenswrapper[4890]: I0121 17:14:34.055851 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-7m788"] Jan 21 17:14:35 crc kubenswrapper[4890]: I0121 17:14:35.928722 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac707741-3e2a-4e3b-91d9-4506d55b585f" path="/var/lib/kubelet/pods/ac707741-3e2a-4e3b-91d9-4506d55b585f/volumes" Jan 21 17:14:44 crc kubenswrapper[4890]: I0121 17:14:44.914724 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:14:44 crc kubenswrapper[4890]: E0121 17:14:44.915957 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:14:48 crc kubenswrapper[4890]: I0121 17:14:48.054947 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rwhlx"] Jan 21 17:14:48 crc kubenswrapper[4890]: I0121 17:14:48.060282 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rwhlx"] Jan 21 17:14:49 crc kubenswrapper[4890]: I0121 17:14:49.925148 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a437a071-861b-41f4-b78f-b2b5775d464f" path="/var/lib/kubelet/pods/a437a071-861b-41f4-b78f-b2b5775d464f/volumes" Jan 21 17:14:55 crc kubenswrapper[4890]: I0121 17:14:55.914902 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:14:55 crc kubenswrapper[4890]: E0121 17:14:55.915505 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.173112 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s"] Jan 21 17:15:00 crc kubenswrapper[4890]: E0121 17:15:00.174253 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a26313-530a-4164-a274-a0dad3abff5d" containerName="registry-server" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.174276 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a26313-530a-4164-a274-a0dad3abff5d" containerName="registry-server" Jan 21 17:15:00 crc kubenswrapper[4890]: E0121 17:15:00.174303 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a26313-530a-4164-a274-a0dad3abff5d" containerName="extract-content" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.174313 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a26313-530a-4164-a274-a0dad3abff5d" containerName="extract-content" Jan 21 17:15:00 crc kubenswrapper[4890]: E0121 17:15:00.174335 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a26313-530a-4164-a274-a0dad3abff5d" containerName="extract-utilities" Jan 21 17:15:00 
crc kubenswrapper[4890]: I0121 17:15:00.174346 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a26313-530a-4164-a274-a0dad3abff5d" containerName="extract-utilities" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.176345 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a26313-530a-4164-a274-a0dad3abff5d" containerName="registry-server" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.177328 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.181631 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.184039 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s"] Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.193546 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.300490 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r56lx\" (UniqueName: \"kubernetes.io/projected/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-kube-api-access-r56lx\") pod \"collect-profiles-29483595-kdk8s\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.300542 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-secret-volume\") pod \"collect-profiles-29483595-kdk8s\" (UID: 
\"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.300706 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-config-volume\") pod \"collect-profiles-29483595-kdk8s\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.402130 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r56lx\" (UniqueName: \"kubernetes.io/projected/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-kube-api-access-r56lx\") pod \"collect-profiles-29483595-kdk8s\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.402190 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-secret-volume\") pod \"collect-profiles-29483595-kdk8s\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.402296 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-config-volume\") pod \"collect-profiles-29483595-kdk8s\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.403346 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-config-volume\") pod \"collect-profiles-29483595-kdk8s\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.413545 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-secret-volume\") pod \"collect-profiles-29483595-kdk8s\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.421403 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r56lx\" (UniqueName: \"kubernetes.io/projected/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-kube-api-access-r56lx\") pod \"collect-profiles-29483595-kdk8s\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.510112 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:00 crc kubenswrapper[4890]: I0121 17:15:00.941066 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s"] Jan 21 17:15:01 crc kubenswrapper[4890]: I0121 17:15:01.515411 4890 generic.go:334] "Generic (PLEG): container finished" podID="76e33f8d-88ab-4b7b-aef6-927ebcf05ac8" containerID="87c886852337c8f39decf31b4581dd2b0e200ac6176ec1747c42f4c7335a0b10" exitCode=0 Jan 21 17:15:01 crc kubenswrapper[4890]: I0121 17:15:01.515468 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" event={"ID":"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8","Type":"ContainerDied","Data":"87c886852337c8f39decf31b4581dd2b0e200ac6176ec1747c42f4c7335a0b10"} Jan 21 17:15:01 crc kubenswrapper[4890]: I0121 17:15:01.516656 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" event={"ID":"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8","Type":"ContainerStarted","Data":"a1ebd9cbf3d10e25db4d29f1b02745a5818d9d03a5b76733310d30a31bde6b8d"} Jan 21 17:15:02 crc kubenswrapper[4890]: I0121 17:15:02.808825 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:02 crc kubenswrapper[4890]: I0121 17:15:02.954223 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r56lx\" (UniqueName: \"kubernetes.io/projected/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-kube-api-access-r56lx\") pod \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " Jan 21 17:15:02 crc kubenswrapper[4890]: I0121 17:15:02.954296 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-config-volume\") pod \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " Jan 21 17:15:02 crc kubenswrapper[4890]: I0121 17:15:02.954335 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-secret-volume\") pod \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\" (UID: \"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8\") " Jan 21 17:15:02 crc kubenswrapper[4890]: I0121 17:15:02.955335 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-config-volume" (OuterVolumeSpecName: "config-volume") pod "76e33f8d-88ab-4b7b-aef6-927ebcf05ac8" (UID: "76e33f8d-88ab-4b7b-aef6-927ebcf05ac8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:15:02 crc kubenswrapper[4890]: I0121 17:15:02.959831 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-kube-api-access-r56lx" (OuterVolumeSpecName: "kube-api-access-r56lx") pod "76e33f8d-88ab-4b7b-aef6-927ebcf05ac8" (UID: "76e33f8d-88ab-4b7b-aef6-927ebcf05ac8"). 
InnerVolumeSpecName "kube-api-access-r56lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:15:02 crc kubenswrapper[4890]: I0121 17:15:02.978484 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76e33f8d-88ab-4b7b-aef6-927ebcf05ac8" (UID: "76e33f8d-88ab-4b7b-aef6-927ebcf05ac8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4890]: I0121 17:15:03.057449 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r56lx\" (UniqueName: \"kubernetes.io/projected/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-kube-api-access-r56lx\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4890]: I0121 17:15:03.057481 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4890]: I0121 17:15:03.057490 4890 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e33f8d-88ab-4b7b-aef6-927ebcf05ac8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4890]: I0121 17:15:03.534031 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" event={"ID":"76e33f8d-88ab-4b7b-aef6-927ebcf05ac8","Type":"ContainerDied","Data":"a1ebd9cbf3d10e25db4d29f1b02745a5818d9d03a5b76733310d30a31bde6b8d"} Jan 21 17:15:03 crc kubenswrapper[4890]: I0121 17:15:03.534783 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ebd9cbf3d10e25db4d29f1b02745a5818d9d03a5b76733310d30a31bde6b8d" Jan 21 17:15:03 crc kubenswrapper[4890]: I0121 17:15:03.534116 4890 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-kdk8s" Jan 21 17:15:03 crc kubenswrapper[4890]: I0121 17:15:03.875553 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666"] Jan 21 17:15:03 crc kubenswrapper[4890]: I0121 17:15:03.883474 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-w6666"] Jan 21 17:15:03 crc kubenswrapper[4890]: I0121 17:15:03.924156 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3206ab55-f23f-439b-af10-6fbadd2548f5" path="/var/lib/kubelet/pods/3206ab55-f23f-439b-af10-6fbadd2548f5/volumes" Jan 21 17:15:10 crc kubenswrapper[4890]: I0121 17:15:10.914937 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:15:10 crc kubenswrapper[4890]: E0121 17:15:10.915473 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:15:19 crc kubenswrapper[4890]: I0121 17:15:19.316805 4890 scope.go:117] "RemoveContainer" containerID="96ce91110b466c157ba748f9a21d049e2dfa62a5ebc4c1d3a6300192670d7a1e" Jan 21 17:15:19 crc kubenswrapper[4890]: I0121 17:15:19.337648 4890 scope.go:117] "RemoveContainer" containerID="533ec7e9078c2460b2a603e0fb9b71a684635477497df281fc5620a685e81ec1" Jan 21 17:15:19 crc kubenswrapper[4890]: I0121 17:15:19.399493 4890 scope.go:117] "RemoveContainer" containerID="06cafcee89ef66240bf9d9fbe33af663eac9c9cdbf6401edb79113bc19f2bdd1" Jan 21 17:15:19 crc 
kubenswrapper[4890]: I0121 17:15:19.424930 4890 scope.go:117] "RemoveContainer" containerID="75f4fd6885bfeef646601242b4a5c6ce2476d3e8dda2fccab05d78a0b71bc154" Jan 21 17:15:19 crc kubenswrapper[4890]: I0121 17:15:19.452394 4890 scope.go:117] "RemoveContainer" containerID="1d1d79f2eb8203ed88d882d14aa14010923b9782c447c1b680f9016930ca7cd4" Jan 21 17:15:19 crc kubenswrapper[4890]: I0121 17:15:19.492863 4890 scope.go:117] "RemoveContainer" containerID="8c3c7cf59cd6d9ac36223ec798e5cc1e0da4af21ed1296b165ded5455aa55e36" Jan 21 17:15:25 crc kubenswrapper[4890]: I0121 17:15:25.914429 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:15:25 crc kubenswrapper[4890]: E0121 17:15:25.915395 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:15:39 crc kubenswrapper[4890]: I0121 17:15:39.918948 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:15:39 crc kubenswrapper[4890]: E0121 17:15:39.919753 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:15:52 crc kubenswrapper[4890]: I0121 17:15:52.016012 4890 scope.go:117] "RemoveContainer" 
containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:15:52 crc kubenswrapper[4890]: E0121 17:15:52.016892 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:16:03 crc kubenswrapper[4890]: I0121 17:16:03.914782 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:16:03 crc kubenswrapper[4890]: E0121 17:16:03.915594 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:16:16 crc kubenswrapper[4890]: I0121 17:16:16.914253 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:16:16 crc kubenswrapper[4890]: E0121 17:16:16.915063 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:16:30 crc kubenswrapper[4890]: I0121 17:16:30.914217 4890 scope.go:117] 
"RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:16:30 crc kubenswrapper[4890]: E0121 17:16:30.915809 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:16:41 crc kubenswrapper[4890]: I0121 17:16:41.914840 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:16:41 crc kubenswrapper[4890]: E0121 17:16:41.915675 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:16:53 crc kubenswrapper[4890]: I0121 17:16:53.915205 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:16:54 crc kubenswrapper[4890]: I0121 17:16:54.466388 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"971767d0e4f9a5c99c0f44a90da2c7ca1d39e0ef2e53e3a5ed2e37b428f86aaf"} Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.248417 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2p27w"] Jan 21 17:18:26 crc kubenswrapper[4890]: E0121 
17:18:26.249468 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e33f8d-88ab-4b7b-aef6-927ebcf05ac8" containerName="collect-profiles" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.249490 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e33f8d-88ab-4b7b-aef6-927ebcf05ac8" containerName="collect-profiles" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.249778 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e33f8d-88ab-4b7b-aef6-927ebcf05ac8" containerName="collect-profiles" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.252637 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.275846 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2p27w"] Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.406251 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-catalog-content\") pod \"redhat-operators-2p27w\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.406313 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhq6z\" (UniqueName: \"kubernetes.io/projected/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-kube-api-access-hhq6z\") pod \"redhat-operators-2p27w\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.406342 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-utilities\") pod \"redhat-operators-2p27w\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.508148 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-catalog-content\") pod \"redhat-operators-2p27w\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.508195 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhq6z\" (UniqueName: \"kubernetes.io/projected/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-kube-api-access-hhq6z\") pod \"redhat-operators-2p27w\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.508221 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-utilities\") pod \"redhat-operators-2p27w\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.508676 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-utilities\") pod \"redhat-operators-2p27w\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.508906 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-catalog-content\") pod \"redhat-operators-2p27w\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.539426 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhq6z\" (UniqueName: \"kubernetes.io/projected/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-kube-api-access-hhq6z\") pod \"redhat-operators-2p27w\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:26 crc kubenswrapper[4890]: I0121 17:18:26.587016 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:27 crc kubenswrapper[4890]: I0121 17:18:27.033760 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2p27w"] Jan 21 17:18:27 crc kubenswrapper[4890]: I0121 17:18:27.294461 4890 generic.go:334] "Generic (PLEG): container finished" podID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerID="52c51c7319e2b1711e62fe0640a718d455839ce8d3a16950ecaa3dd94179a66a" exitCode=0 Jan 21 17:18:27 crc kubenswrapper[4890]: I0121 17:18:27.294520 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2p27w" event={"ID":"5e8f6329-74d0-4ac4-abcd-1f7d47096b81","Type":"ContainerDied","Data":"52c51c7319e2b1711e62fe0640a718d455839ce8d3a16950ecaa3dd94179a66a"} Jan 21 17:18:27 crc kubenswrapper[4890]: I0121 17:18:27.294553 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2p27w" event={"ID":"5e8f6329-74d0-4ac4-abcd-1f7d47096b81","Type":"ContainerStarted","Data":"2422ac48dab6bf87fc27949078c230e2ddbc1c8eb4d0e56455467f4efb52be5d"} Jan 21 17:18:27 crc kubenswrapper[4890]: I0121 17:18:27.298655 4890 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 21 17:18:28 crc kubenswrapper[4890]: I0121 17:18:28.302725 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2p27w" event={"ID":"5e8f6329-74d0-4ac4-abcd-1f7d47096b81","Type":"ContainerStarted","Data":"3e17294a546fb87dbdf0226bbbdf68302e27c5666bd524c7b075e6fa249182d6"} Jan 21 17:18:29 crc kubenswrapper[4890]: I0121 17:18:29.316088 4890 generic.go:334] "Generic (PLEG): container finished" podID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerID="3e17294a546fb87dbdf0226bbbdf68302e27c5666bd524c7b075e6fa249182d6" exitCode=0 Jan 21 17:18:29 crc kubenswrapper[4890]: I0121 17:18:29.316408 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2p27w" event={"ID":"5e8f6329-74d0-4ac4-abcd-1f7d47096b81","Type":"ContainerDied","Data":"3e17294a546fb87dbdf0226bbbdf68302e27c5666bd524c7b075e6fa249182d6"} Jan 21 17:18:30 crc kubenswrapper[4890]: I0121 17:18:30.328309 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2p27w" event={"ID":"5e8f6329-74d0-4ac4-abcd-1f7d47096b81","Type":"ContainerStarted","Data":"f628fae5cb21ef4f3162b3feba6c91c8125503c700e947883ecccb7b14d0a4ff"} Jan 21 17:18:30 crc kubenswrapper[4890]: I0121 17:18:30.351255 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2p27w" podStartSLOduration=1.938728932 podStartE2EDuration="4.351232404s" podCreationTimestamp="2026-01-21 17:18:26 +0000 UTC" firstStartedPulling="2026-01-21 17:18:27.298390254 +0000 UTC m=+6389.659832663" lastFinishedPulling="2026-01-21 17:18:29.710893706 +0000 UTC m=+6392.072336135" observedRunningTime="2026-01-21 17:18:30.344980569 +0000 UTC m=+6392.706422988" watchObservedRunningTime="2026-01-21 17:18:30.351232404 +0000 UTC m=+6392.712674813" Jan 21 17:18:36 crc kubenswrapper[4890]: I0121 17:18:36.588042 4890 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:36 crc kubenswrapper[4890]: I0121 17:18:36.588829 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:36 crc kubenswrapper[4890]: I0121 17:18:36.667022 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:37 crc kubenswrapper[4890]: I0121 17:18:37.429767 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:37 crc kubenswrapper[4890]: I0121 17:18:37.488090 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2p27w"] Jan 21 17:18:39 crc kubenswrapper[4890]: I0121 17:18:39.397134 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2p27w" podUID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerName="registry-server" containerID="cri-o://f628fae5cb21ef4f3162b3feba6c91c8125503c700e947883ecccb7b14d0a4ff" gracePeriod=2 Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.419953 4890 generic.go:334] "Generic (PLEG): container finished" podID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerID="f628fae5cb21ef4f3162b3feba6c91c8125503c700e947883ecccb7b14d0a4ff" exitCode=0 Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.420060 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2p27w" event={"ID":"5e8f6329-74d0-4ac4-abcd-1f7d47096b81","Type":"ContainerDied","Data":"f628fae5cb21ef4f3162b3feba6c91c8125503c700e947883ecccb7b14d0a4ff"} Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.521779 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.663573 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-catalog-content\") pod \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.663690 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhq6z\" (UniqueName: \"kubernetes.io/projected/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-kube-api-access-hhq6z\") pod \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.663822 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-utilities\") pod \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\" (UID: \"5e8f6329-74d0-4ac4-abcd-1f7d47096b81\") " Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.664955 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-utilities" (OuterVolumeSpecName: "utilities") pod "5e8f6329-74d0-4ac4-abcd-1f7d47096b81" (UID: "5e8f6329-74d0-4ac4-abcd-1f7d47096b81"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.665257 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.669393 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-kube-api-access-hhq6z" (OuterVolumeSpecName: "kube-api-access-hhq6z") pod "5e8f6329-74d0-4ac4-abcd-1f7d47096b81" (UID: "5e8f6329-74d0-4ac4-abcd-1f7d47096b81"). InnerVolumeSpecName "kube-api-access-hhq6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.767141 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhq6z\" (UniqueName: \"kubernetes.io/projected/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-kube-api-access-hhq6z\") on node \"crc\" DevicePath \"\"" Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.789598 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e8f6329-74d0-4ac4-abcd-1f7d47096b81" (UID: "5e8f6329-74d0-4ac4-abcd-1f7d47096b81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:18:42 crc kubenswrapper[4890]: I0121 17:18:42.868417 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8f6329-74d0-4ac4-abcd-1f7d47096b81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:18:43 crc kubenswrapper[4890]: I0121 17:18:43.432190 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2p27w" event={"ID":"5e8f6329-74d0-4ac4-abcd-1f7d47096b81","Type":"ContainerDied","Data":"2422ac48dab6bf87fc27949078c230e2ddbc1c8eb4d0e56455467f4efb52be5d"} Jan 21 17:18:43 crc kubenswrapper[4890]: I0121 17:18:43.432527 4890 scope.go:117] "RemoveContainer" containerID="f628fae5cb21ef4f3162b3feba6c91c8125503c700e947883ecccb7b14d0a4ff" Jan 21 17:18:43 crc kubenswrapper[4890]: I0121 17:18:43.432287 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2p27w" Jan 21 17:18:43 crc kubenswrapper[4890]: I0121 17:18:43.462685 4890 scope.go:117] "RemoveContainer" containerID="3e17294a546fb87dbdf0226bbbdf68302e27c5666bd524c7b075e6fa249182d6" Jan 21 17:18:43 crc kubenswrapper[4890]: I0121 17:18:43.502751 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2p27w"] Jan 21 17:18:43 crc kubenswrapper[4890]: I0121 17:18:43.511198 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2p27w"] Jan 21 17:18:43 crc kubenswrapper[4890]: I0121 17:18:43.520232 4890 scope.go:117] "RemoveContainer" containerID="52c51c7319e2b1711e62fe0640a718d455839ce8d3a16950ecaa3dd94179a66a" Jan 21 17:18:43 crc kubenswrapper[4890]: I0121 17:18:43.933795 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" path="/var/lib/kubelet/pods/5e8f6329-74d0-4ac4-abcd-1f7d47096b81/volumes" Jan 21 17:19:18 crc 
kubenswrapper[4890]: I0121 17:19:18.762307 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:19:18 crc kubenswrapper[4890]: I0121 17:19:18.762748 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:19:48 crc kubenswrapper[4890]: I0121 17:19:48.762096 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:19:48 crc kubenswrapper[4890]: I0121 17:19:48.762935 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.330884 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r25xz"] Jan 21 17:19:53 crc kubenswrapper[4890]: E0121 17:19:53.331784 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerName="registry-server" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.331797 4890 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerName="registry-server" Jan 21 17:19:53 crc kubenswrapper[4890]: E0121 17:19:53.331812 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerName="extract-content" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.331818 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerName="extract-content" Jan 21 17:19:53 crc kubenswrapper[4890]: E0121 17:19:53.331829 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerName="extract-utilities" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.331835 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerName="extract-utilities" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.332032 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e8f6329-74d0-4ac4-abcd-1f7d47096b81" containerName="registry-server" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.333229 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.342552 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r25xz"] Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.391799 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-catalog-content\") pod \"redhat-marketplace-r25xz\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.391858 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgt8l\" (UniqueName: \"kubernetes.io/projected/3311457b-7645-4916-a4fe-d079c01aef9d-kube-api-access-bgt8l\") pod \"redhat-marketplace-r25xz\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.391883 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-utilities\") pod \"redhat-marketplace-r25xz\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.493599 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-catalog-content\") pod \"redhat-marketplace-r25xz\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.493672 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bgt8l\" (UniqueName: \"kubernetes.io/projected/3311457b-7645-4916-a4fe-d079c01aef9d-kube-api-access-bgt8l\") pod \"redhat-marketplace-r25xz\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.493701 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-utilities\") pod \"redhat-marketplace-r25xz\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.494294 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-utilities\") pod \"redhat-marketplace-r25xz\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.494344 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-catalog-content\") pod \"redhat-marketplace-r25xz\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.518316 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgt8l\" (UniqueName: \"kubernetes.io/projected/3311457b-7645-4916-a4fe-d079c01aef9d-kube-api-access-bgt8l\") pod \"redhat-marketplace-r25xz\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:53 crc kubenswrapper[4890]: I0121 17:19:53.652387 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:19:54 crc kubenswrapper[4890]: I0121 17:19:54.125899 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r25xz"] Jan 21 17:19:54 crc kubenswrapper[4890]: W0121 17:19:54.132398 4890 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3311457b_7645_4916_a4fe_d079c01aef9d.slice/crio-00ebbb8c94996b0bedd7cef0602d016a543777f07a63631933401a13330391d2 WatchSource:0}: Error finding container 00ebbb8c94996b0bedd7cef0602d016a543777f07a63631933401a13330391d2: Status 404 returned error can't find the container with id 00ebbb8c94996b0bedd7cef0602d016a543777f07a63631933401a13330391d2 Jan 21 17:19:54 crc kubenswrapper[4890]: I0121 17:19:54.976008 4890 generic.go:334] "Generic (PLEG): container finished" podID="3311457b-7645-4916-a4fe-d079c01aef9d" containerID="650c30484d9c31d6d21906bc73e1788a081ab322d8664285cf145f3bf83ab1b2" exitCode=0 Jan 21 17:19:54 crc kubenswrapper[4890]: I0121 17:19:54.976322 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r25xz" event={"ID":"3311457b-7645-4916-a4fe-d079c01aef9d","Type":"ContainerDied","Data":"650c30484d9c31d6d21906bc73e1788a081ab322d8664285cf145f3bf83ab1b2"} Jan 21 17:19:54 crc kubenswrapper[4890]: I0121 17:19:54.976364 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r25xz" event={"ID":"3311457b-7645-4916-a4fe-d079c01aef9d","Type":"ContainerStarted","Data":"00ebbb8c94996b0bedd7cef0602d016a543777f07a63631933401a13330391d2"} Jan 21 17:19:56 crc kubenswrapper[4890]: I0121 17:19:56.993685 4890 generic.go:334] "Generic (PLEG): container finished" podID="3311457b-7645-4916-a4fe-d079c01aef9d" containerID="7261f1efcdba2996e0df6b8a8705b0d1df819396a2bacaedf98fb1100be90379" exitCode=0 Jan 21 17:19:56 crc kubenswrapper[4890]: I0121 
17:19:56.993774 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r25xz" event={"ID":"3311457b-7645-4916-a4fe-d079c01aef9d","Type":"ContainerDied","Data":"7261f1efcdba2996e0df6b8a8705b0d1df819396a2bacaedf98fb1100be90379"} Jan 21 17:19:58 crc kubenswrapper[4890]: I0121 17:19:58.003040 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r25xz" event={"ID":"3311457b-7645-4916-a4fe-d079c01aef9d","Type":"ContainerStarted","Data":"7fd14b1a62845b57382c7f1f8ed1add74fcab6eff3c7d7050feab6137a18dbf5"} Jan 21 17:19:58 crc kubenswrapper[4890]: I0121 17:19:58.028570 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r25xz" podStartSLOduration=2.584964546 podStartE2EDuration="5.028548429s" podCreationTimestamp="2026-01-21 17:19:53 +0000 UTC" firstStartedPulling="2026-01-21 17:19:54.977699079 +0000 UTC m=+6477.339141488" lastFinishedPulling="2026-01-21 17:19:57.421282962 +0000 UTC m=+6479.782725371" observedRunningTime="2026-01-21 17:19:58.023420862 +0000 UTC m=+6480.384863281" watchObservedRunningTime="2026-01-21 17:19:58.028548429 +0000 UTC m=+6480.389990848" Jan 21 17:20:03 crc kubenswrapper[4890]: I0121 17:20:03.653088 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:20:03 crc kubenswrapper[4890]: I0121 17:20:03.655482 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:20:03 crc kubenswrapper[4890]: I0121 17:20:03.715979 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:20:04 crc kubenswrapper[4890]: I0121 17:20:04.086255 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 
17:20:04 crc kubenswrapper[4890]: I0121 17:20:04.134072 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r25xz"] Jan 21 17:20:06 crc kubenswrapper[4890]: I0121 17:20:06.061136 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r25xz" podUID="3311457b-7645-4916-a4fe-d079c01aef9d" containerName="registry-server" containerID="cri-o://7fd14b1a62845b57382c7f1f8ed1add74fcab6eff3c7d7050feab6137a18dbf5" gracePeriod=2 Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.098717 4890 generic.go:334] "Generic (PLEG): container finished" podID="3311457b-7645-4916-a4fe-d079c01aef9d" containerID="7fd14b1a62845b57382c7f1f8ed1add74fcab6eff3c7d7050feab6137a18dbf5" exitCode=0 Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.098790 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r25xz" event={"ID":"3311457b-7645-4916-a4fe-d079c01aef9d","Type":"ContainerDied","Data":"7fd14b1a62845b57382c7f1f8ed1add74fcab6eff3c7d7050feab6137a18dbf5"} Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.248692 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.258040 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-utilities\") pod \"3311457b-7645-4916-a4fe-d079c01aef9d\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.258087 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgt8l\" (UniqueName: \"kubernetes.io/projected/3311457b-7645-4916-a4fe-d079c01aef9d-kube-api-access-bgt8l\") pod \"3311457b-7645-4916-a4fe-d079c01aef9d\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.258149 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-catalog-content\") pod \"3311457b-7645-4916-a4fe-d079c01aef9d\" (UID: \"3311457b-7645-4916-a4fe-d079c01aef9d\") " Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.261147 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-utilities" (OuterVolumeSpecName: "utilities") pod "3311457b-7645-4916-a4fe-d079c01aef9d" (UID: "3311457b-7645-4916-a4fe-d079c01aef9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.266676 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3311457b-7645-4916-a4fe-d079c01aef9d-kube-api-access-bgt8l" (OuterVolumeSpecName: "kube-api-access-bgt8l") pod "3311457b-7645-4916-a4fe-d079c01aef9d" (UID: "3311457b-7645-4916-a4fe-d079c01aef9d"). InnerVolumeSpecName "kube-api-access-bgt8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.306196 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3311457b-7645-4916-a4fe-d079c01aef9d" (UID: "3311457b-7645-4916-a4fe-d079c01aef9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.359977 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.360019 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3311457b-7645-4916-a4fe-d079c01aef9d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:07 crc kubenswrapper[4890]: I0121 17:20:07.360029 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgt8l\" (UniqueName: \"kubernetes.io/projected/3311457b-7645-4916-a4fe-d079c01aef9d-kube-api-access-bgt8l\") on node \"crc\" DevicePath \"\"" Jan 21 17:20:08 crc kubenswrapper[4890]: I0121 17:20:08.110980 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r25xz" event={"ID":"3311457b-7645-4916-a4fe-d079c01aef9d","Type":"ContainerDied","Data":"00ebbb8c94996b0bedd7cef0602d016a543777f07a63631933401a13330391d2"} Jan 21 17:20:08 crc kubenswrapper[4890]: I0121 17:20:08.111035 4890 scope.go:117] "RemoveContainer" containerID="7fd14b1a62845b57382c7f1f8ed1add74fcab6eff3c7d7050feab6137a18dbf5" Jan 21 17:20:08 crc kubenswrapper[4890]: I0121 17:20:08.111198 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r25xz" Jan 21 17:20:08 crc kubenswrapper[4890]: I0121 17:20:08.133061 4890 scope.go:117] "RemoveContainer" containerID="7261f1efcdba2996e0df6b8a8705b0d1df819396a2bacaedf98fb1100be90379" Jan 21 17:20:08 crc kubenswrapper[4890]: I0121 17:20:08.136067 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r25xz"] Jan 21 17:20:08 crc kubenswrapper[4890]: I0121 17:20:08.142435 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r25xz"] Jan 21 17:20:08 crc kubenswrapper[4890]: I0121 17:20:08.155757 4890 scope.go:117] "RemoveContainer" containerID="650c30484d9c31d6d21906bc73e1788a081ab322d8664285cf145f3bf83ab1b2" Jan 21 17:20:09 crc kubenswrapper[4890]: I0121 17:20:09.926312 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3311457b-7645-4916-a4fe-d079c01aef9d" path="/var/lib/kubelet/pods/3311457b-7645-4916-a4fe-d079c01aef9d/volumes" Jan 21 17:20:18 crc kubenswrapper[4890]: I0121 17:20:18.761919 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:20:18 crc kubenswrapper[4890]: I0121 17:20:18.762450 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:20:18 crc kubenswrapper[4890]: I0121 17:20:18.762508 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 
17:20:18 crc kubenswrapper[4890]: I0121 17:20:18.763280 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"971767d0e4f9a5c99c0f44a90da2c7ca1d39e0ef2e53e3a5ed2e37b428f86aaf"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:20:18 crc kubenswrapper[4890]: I0121 17:20:18.763339 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://971767d0e4f9a5c99c0f44a90da2c7ca1d39e0ef2e53e3a5ed2e37b428f86aaf" gracePeriod=600 Jan 21 17:20:19 crc kubenswrapper[4890]: I0121 17:20:19.211900 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="971767d0e4f9a5c99c0f44a90da2c7ca1d39e0ef2e53e3a5ed2e37b428f86aaf" exitCode=0 Jan 21 17:20:19 crc kubenswrapper[4890]: I0121 17:20:19.211987 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"971767d0e4f9a5c99c0f44a90da2c7ca1d39e0ef2e53e3a5ed2e37b428f86aaf"} Jan 21 17:20:19 crc kubenswrapper[4890]: I0121 17:20:19.212534 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb"} Jan 21 17:20:19 crc kubenswrapper[4890]: I0121 17:20:19.212561 4890 scope.go:117] "RemoveContainer" containerID="1d576b1efc86d934f45fd0bbe470b3933350d0cd47ac9cec5531789408521537" Jan 21 17:22:48 crc kubenswrapper[4890]: I0121 17:22:48.762536 4890 
patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:22:48 crc kubenswrapper[4890]: I0121 17:22:48.763071 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:23:18 crc kubenswrapper[4890]: I0121 17:23:18.762886 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:23:18 crc kubenswrapper[4890]: I0121 17:23:18.763642 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:23:48 crc kubenswrapper[4890]: I0121 17:23:48.762622 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:23:48 crc kubenswrapper[4890]: I0121 17:23:48.763181 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" 
podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:23:48 crc kubenswrapper[4890]: I0121 17:23:48.763248 4890 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" Jan 21 17:23:48 crc kubenswrapper[4890]: I0121 17:23:48.764066 4890 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb"} pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:23:48 crc kubenswrapper[4890]: I0121 17:23:48.764137 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" containerID="cri-o://2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" gracePeriod=600 Jan 21 17:23:48 crc kubenswrapper[4890]: I0121 17:23:48.950046 4890 generic.go:334] "Generic (PLEG): container finished" podID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" exitCode=0 Jan 21 17:23:48 crc kubenswrapper[4890]: I0121 17:23:48.950097 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerDied","Data":"2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb"} Jan 21 17:23:48 crc kubenswrapper[4890]: I0121 17:23:48.950166 4890 scope.go:117] "RemoveContainer" 
containerID="971767d0e4f9a5c99c0f44a90da2c7ca1d39e0ef2e53e3a5ed2e37b428f86aaf" Jan 21 17:23:49 crc kubenswrapper[4890]: E0121 17:23:49.947967 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:23:49 crc kubenswrapper[4890]: I0121 17:23:49.963879 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:23:49 crc kubenswrapper[4890]: E0121 17:23:49.964805 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:24:00 crc kubenswrapper[4890]: I0121 17:24:00.915275 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:24:00 crc kubenswrapper[4890]: E0121 17:24:00.916303 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.458066 4890 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2xfcj"] Jan 21 17:24:08 crc kubenswrapper[4890]: E0121 17:24:08.459092 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3311457b-7645-4916-a4fe-d079c01aef9d" containerName="extract-content" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.459114 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="3311457b-7645-4916-a4fe-d079c01aef9d" containerName="extract-content" Jan 21 17:24:08 crc kubenswrapper[4890]: E0121 17:24:08.459147 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3311457b-7645-4916-a4fe-d079c01aef9d" containerName="registry-server" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.459160 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="3311457b-7645-4916-a4fe-d079c01aef9d" containerName="registry-server" Jan 21 17:24:08 crc kubenswrapper[4890]: E0121 17:24:08.459189 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3311457b-7645-4916-a4fe-d079c01aef9d" containerName="extract-utilities" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.459201 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="3311457b-7645-4916-a4fe-d079c01aef9d" containerName="extract-utilities" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.459530 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="3311457b-7645-4916-a4fe-d079c01aef9d" containerName="registry-server" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.461956 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.473137 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xfcj"] Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.641229 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-catalog-content\") pod \"community-operators-2xfcj\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.641578 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-utilities\") pod \"community-operators-2xfcj\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.641603 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpdjl\" (UniqueName: \"kubernetes.io/projected/f741b4c1-f270-4090-972c-39ac440726e0-kube-api-access-bpdjl\") pod \"community-operators-2xfcj\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.742753 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-catalog-content\") pod \"community-operators-2xfcj\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.742815 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-utilities\") pod \"community-operators-2xfcj\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.742833 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpdjl\" (UniqueName: \"kubernetes.io/projected/f741b4c1-f270-4090-972c-39ac440726e0-kube-api-access-bpdjl\") pod \"community-operators-2xfcj\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.743731 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-catalog-content\") pod \"community-operators-2xfcj\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.743892 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-utilities\") pod \"community-operators-2xfcj\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.773991 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpdjl\" (UniqueName: \"kubernetes.io/projected/f741b4c1-f270-4090-972c-39ac440726e0-kube-api-access-bpdjl\") pod \"community-operators-2xfcj\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:08 crc kubenswrapper[4890]: I0121 17:24:08.803236 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:09 crc kubenswrapper[4890]: I0121 17:24:09.317842 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xfcj"] Jan 21 17:24:10 crc kubenswrapper[4890]: I0121 17:24:10.132809 4890 generic.go:334] "Generic (PLEG): container finished" podID="f741b4c1-f270-4090-972c-39ac440726e0" containerID="661f3c7d0b559ddd727265a20ab05df27a0642aa6f877e57c800923458d23adf" exitCode=0 Jan 21 17:24:10 crc kubenswrapper[4890]: I0121 17:24:10.132909 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xfcj" event={"ID":"f741b4c1-f270-4090-972c-39ac440726e0","Type":"ContainerDied","Data":"661f3c7d0b559ddd727265a20ab05df27a0642aa6f877e57c800923458d23adf"} Jan 21 17:24:10 crc kubenswrapper[4890]: I0121 17:24:10.133142 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xfcj" event={"ID":"f741b4c1-f270-4090-972c-39ac440726e0","Type":"ContainerStarted","Data":"db6a7f36630450aad36af9413c5928296ce45179719f21cfa4ce57927c07b706"} Jan 21 17:24:10 crc kubenswrapper[4890]: I0121 17:24:10.134341 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:24:11 crc kubenswrapper[4890]: I0121 17:24:11.140522 4890 generic.go:334] "Generic (PLEG): container finished" podID="f741b4c1-f270-4090-972c-39ac440726e0" containerID="fece1bb6d47feee487a30e62279bc49eac56084ed14add4ec19c2ae9e46c8f09" exitCode=0 Jan 21 17:24:11 crc kubenswrapper[4890]: I0121 17:24:11.140717 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xfcj" event={"ID":"f741b4c1-f270-4090-972c-39ac440726e0","Type":"ContainerDied","Data":"fece1bb6d47feee487a30e62279bc49eac56084ed14add4ec19c2ae9e46c8f09"} Jan 21 17:24:12 crc kubenswrapper[4890]: I0121 17:24:12.149380 4890 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-2xfcj" event={"ID":"f741b4c1-f270-4090-972c-39ac440726e0","Type":"ContainerStarted","Data":"b251c40232f1750cf2c4697a2023295b5f5be91b8a86a304d3a078008efa3c64"} Jan 21 17:24:12 crc kubenswrapper[4890]: I0121 17:24:12.168655 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2xfcj" podStartSLOduration=2.477779764 podStartE2EDuration="4.168635825s" podCreationTimestamp="2026-01-21 17:24:08 +0000 UTC" firstStartedPulling="2026-01-21 17:24:10.134057039 +0000 UTC m=+6732.495499458" lastFinishedPulling="2026-01-21 17:24:11.82491311 +0000 UTC m=+6734.186355519" observedRunningTime="2026-01-21 17:24:12.165893447 +0000 UTC m=+6734.527335856" watchObservedRunningTime="2026-01-21 17:24:12.168635825 +0000 UTC m=+6734.530078234" Jan 21 17:24:13 crc kubenswrapper[4890]: I0121 17:24:13.915006 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:24:13 crc kubenswrapper[4890]: E0121 17:24:13.915570 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:24:18 crc kubenswrapper[4890]: I0121 17:24:18.806579 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:18 crc kubenswrapper[4890]: I0121 17:24:18.806953 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:18 crc kubenswrapper[4890]: I0121 17:24:18.860824 4890 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:19 crc kubenswrapper[4890]: I0121 17:24:19.239978 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:19 crc kubenswrapper[4890]: I0121 17:24:19.293582 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2xfcj"] Jan 21 17:24:21 crc kubenswrapper[4890]: I0121 17:24:21.209141 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2xfcj" podUID="f741b4c1-f270-4090-972c-39ac440726e0" containerName="registry-server" containerID="cri-o://b251c40232f1750cf2c4697a2023295b5f5be91b8a86a304d3a078008efa3c64" gracePeriod=2 Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.239479 4890 generic.go:334] "Generic (PLEG): container finished" podID="f741b4c1-f270-4090-972c-39ac440726e0" containerID="b251c40232f1750cf2c4697a2023295b5f5be91b8a86a304d3a078008efa3c64" exitCode=0 Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.239816 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xfcj" event={"ID":"f741b4c1-f270-4090-972c-39ac440726e0","Type":"ContainerDied","Data":"b251c40232f1750cf2c4697a2023295b5f5be91b8a86a304d3a078008efa3c64"} Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.502903 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.601935 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-catalog-content\") pod \"f741b4c1-f270-4090-972c-39ac440726e0\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.602011 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-utilities\") pod \"f741b4c1-f270-4090-972c-39ac440726e0\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.602056 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpdjl\" (UniqueName: \"kubernetes.io/projected/f741b4c1-f270-4090-972c-39ac440726e0-kube-api-access-bpdjl\") pod \"f741b4c1-f270-4090-972c-39ac440726e0\" (UID: \"f741b4c1-f270-4090-972c-39ac440726e0\") " Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.604573 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-utilities" (OuterVolumeSpecName: "utilities") pod "f741b4c1-f270-4090-972c-39ac440726e0" (UID: "f741b4c1-f270-4090-972c-39ac440726e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.608583 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f741b4c1-f270-4090-972c-39ac440726e0-kube-api-access-bpdjl" (OuterVolumeSpecName: "kube-api-access-bpdjl") pod "f741b4c1-f270-4090-972c-39ac440726e0" (UID: "f741b4c1-f270-4090-972c-39ac440726e0"). InnerVolumeSpecName "kube-api-access-bpdjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.662030 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f741b4c1-f270-4090-972c-39ac440726e0" (UID: "f741b4c1-f270-4090-972c-39ac440726e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.703534 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.703576 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f741b4c1-f270-4090-972c-39ac440726e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:23 crc kubenswrapper[4890]: I0121 17:24:23.703586 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpdjl\" (UniqueName: \"kubernetes.io/projected/f741b4c1-f270-4090-972c-39ac440726e0-kube-api-access-bpdjl\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:24 crc kubenswrapper[4890]: I0121 17:24:24.251795 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xfcj" event={"ID":"f741b4c1-f270-4090-972c-39ac440726e0","Type":"ContainerDied","Data":"db6a7f36630450aad36af9413c5928296ce45179719f21cfa4ce57927c07b706"} Jan 21 17:24:24 crc kubenswrapper[4890]: I0121 17:24:24.252196 4890 scope.go:117] "RemoveContainer" containerID="b251c40232f1750cf2c4697a2023295b5f5be91b8a86a304d3a078008efa3c64" Jan 21 17:24:24 crc kubenswrapper[4890]: I0121 17:24:24.251858 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2xfcj" Jan 21 17:24:24 crc kubenswrapper[4890]: I0121 17:24:24.277955 4890 scope.go:117] "RemoveContainer" containerID="fece1bb6d47feee487a30e62279bc49eac56084ed14add4ec19c2ae9e46c8f09" Jan 21 17:24:24 crc kubenswrapper[4890]: I0121 17:24:24.280498 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2xfcj"] Jan 21 17:24:24 crc kubenswrapper[4890]: I0121 17:24:24.288882 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2xfcj"] Jan 21 17:24:24 crc kubenswrapper[4890]: I0121 17:24:24.298590 4890 scope.go:117] "RemoveContainer" containerID="661f3c7d0b559ddd727265a20ab05df27a0642aa6f877e57c800923458d23adf" Jan 21 17:24:25 crc kubenswrapper[4890]: I0121 17:24:25.923466 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f741b4c1-f270-4090-972c-39ac440726e0" path="/var/lib/kubelet/pods/f741b4c1-f270-4090-972c-39ac440726e0/volumes" Jan 21 17:24:28 crc kubenswrapper[4890]: I0121 17:24:28.914485 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:24:28 crc kubenswrapper[4890]: E0121 17:24:28.914941 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:24:40 crc kubenswrapper[4890]: I0121 17:24:40.915662 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:24:40 crc kubenswrapper[4890]: E0121 17:24:40.916457 4890 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:24:55 crc kubenswrapper[4890]: I0121 17:24:55.914568 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:24:55 crc kubenswrapper[4890]: E0121 17:24:55.915298 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:25:09 crc kubenswrapper[4890]: I0121 17:25:09.915818 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:25:09 crc kubenswrapper[4890]: E0121 17:25:09.916761 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.345079 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fcdv8"] Jan 21 17:25:14 crc kubenswrapper[4890]: E0121 17:25:14.363629 4890 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f741b4c1-f270-4090-972c-39ac440726e0" containerName="extract-utilities" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.363678 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f741b4c1-f270-4090-972c-39ac440726e0" containerName="extract-utilities" Jan 21 17:25:14 crc kubenswrapper[4890]: E0121 17:25:14.363741 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f741b4c1-f270-4090-972c-39ac440726e0" containerName="extract-content" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.363755 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f741b4c1-f270-4090-972c-39ac440726e0" containerName="extract-content" Jan 21 17:25:14 crc kubenswrapper[4890]: E0121 17:25:14.363819 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f741b4c1-f270-4090-972c-39ac440726e0" containerName="registry-server" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.363832 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="f741b4c1-f270-4090-972c-39ac440726e0" containerName="registry-server" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.364567 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="f741b4c1-f270-4090-972c-39ac440726e0" containerName="registry-server" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.370999 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.392719 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fcdv8"] Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.558981 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx9np\" (UniqueName: \"kubernetes.io/projected/54f430aa-d0aa-42ee-aff1-0f8845901dcb-kube-api-access-fx9np\") pod \"certified-operators-fcdv8\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.559036 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-utilities\") pod \"certified-operators-fcdv8\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.559128 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-catalog-content\") pod \"certified-operators-fcdv8\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.661751 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx9np\" (UniqueName: \"kubernetes.io/projected/54f430aa-d0aa-42ee-aff1-0f8845901dcb-kube-api-access-fx9np\") pod \"certified-operators-fcdv8\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.661830 4890 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-utilities\") pod \"certified-operators-fcdv8\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.661902 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-catalog-content\") pod \"certified-operators-fcdv8\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.662576 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-utilities\") pod \"certified-operators-fcdv8\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.662724 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-catalog-content\") pod \"certified-operators-fcdv8\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.697161 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx9np\" (UniqueName: \"kubernetes.io/projected/54f430aa-d0aa-42ee-aff1-0f8845901dcb-kube-api-access-fx9np\") pod \"certified-operators-fcdv8\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:14 crc kubenswrapper[4890]: I0121 17:25:14.709155 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:15 crc kubenswrapper[4890]: I0121 17:25:15.209745 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fcdv8"] Jan 21 17:25:15 crc kubenswrapper[4890]: I0121 17:25:15.728801 4890 generic.go:334] "Generic (PLEG): container finished" podID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerID="b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c" exitCode=0 Jan 21 17:25:15 crc kubenswrapper[4890]: I0121 17:25:15.728878 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcdv8" event={"ID":"54f430aa-d0aa-42ee-aff1-0f8845901dcb","Type":"ContainerDied","Data":"b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c"} Jan 21 17:25:15 crc kubenswrapper[4890]: I0121 17:25:15.728922 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcdv8" event={"ID":"54f430aa-d0aa-42ee-aff1-0f8845901dcb","Type":"ContainerStarted","Data":"65014eef7f8c4173819f74d61fd297326703f08b7b82b6e0d39c92c866e4d03d"} Jan 21 17:25:17 crc kubenswrapper[4890]: I0121 17:25:17.748841 4890 generic.go:334] "Generic (PLEG): container finished" podID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerID="06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83" exitCode=0 Jan 21 17:25:17 crc kubenswrapper[4890]: I0121 17:25:17.748930 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcdv8" event={"ID":"54f430aa-d0aa-42ee-aff1-0f8845901dcb","Type":"ContainerDied","Data":"06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83"} Jan 21 17:25:18 crc kubenswrapper[4890]: I0121 17:25:18.769567 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcdv8" 
event={"ID":"54f430aa-d0aa-42ee-aff1-0f8845901dcb","Type":"ContainerStarted","Data":"fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6"} Jan 21 17:25:18 crc kubenswrapper[4890]: I0121 17:25:18.797265 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fcdv8" podStartSLOduration=2.389568401 podStartE2EDuration="4.797246961s" podCreationTimestamp="2026-01-21 17:25:14 +0000 UTC" firstStartedPulling="2026-01-21 17:25:15.73161235 +0000 UTC m=+6798.093054789" lastFinishedPulling="2026-01-21 17:25:18.13929094 +0000 UTC m=+6800.500733349" observedRunningTime="2026-01-21 17:25:18.795558919 +0000 UTC m=+6801.157001348" watchObservedRunningTime="2026-01-21 17:25:18.797246961 +0000 UTC m=+6801.158689380" Jan 21 17:25:24 crc kubenswrapper[4890]: I0121 17:25:24.710392 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:24 crc kubenswrapper[4890]: I0121 17:25:24.711145 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:24 crc kubenswrapper[4890]: I0121 17:25:24.807824 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:24 crc kubenswrapper[4890]: I0121 17:25:24.904025 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:24 crc kubenswrapper[4890]: I0121 17:25:24.914825 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:25:24 crc kubenswrapper[4890]: E0121 17:25:24.915052 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:25:25 crc kubenswrapper[4890]: I0121 17:25:25.052454 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fcdv8"] Jan 21 17:25:26 crc kubenswrapper[4890]: I0121 17:25:26.838382 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fcdv8" podUID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerName="registry-server" containerID="cri-o://fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6" gracePeriod=2 Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.416373 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.545291 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-catalog-content\") pod \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.545429 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx9np\" (UniqueName: \"kubernetes.io/projected/54f430aa-d0aa-42ee-aff1-0f8845901dcb-kube-api-access-fx9np\") pod \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\" (UID: \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.545479 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-utilities\") pod \"54f430aa-d0aa-42ee-aff1-0f8845901dcb\" (UID: 
\"54f430aa-d0aa-42ee-aff1-0f8845901dcb\") " Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.546927 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-utilities" (OuterVolumeSpecName: "utilities") pod "54f430aa-d0aa-42ee-aff1-0f8845901dcb" (UID: "54f430aa-d0aa-42ee-aff1-0f8845901dcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.563091 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f430aa-d0aa-42ee-aff1-0f8845901dcb-kube-api-access-fx9np" (OuterVolumeSpecName: "kube-api-access-fx9np") pod "54f430aa-d0aa-42ee-aff1-0f8845901dcb" (UID: "54f430aa-d0aa-42ee-aff1-0f8845901dcb"). InnerVolumeSpecName "kube-api-access-fx9np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.597038 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54f430aa-d0aa-42ee-aff1-0f8845901dcb" (UID: "54f430aa-d0aa-42ee-aff1-0f8845901dcb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.648170 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.648214 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx9np\" (UniqueName: \"kubernetes.io/projected/54f430aa-d0aa-42ee-aff1-0f8845901dcb-kube-api-access-fx9np\") on node \"crc\" DevicePath \"\"" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.648227 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f430aa-d0aa-42ee-aff1-0f8845901dcb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.854512 4890 generic.go:334] "Generic (PLEG): container finished" podID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerID="fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6" exitCode=0 Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.854592 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcdv8" event={"ID":"54f430aa-d0aa-42ee-aff1-0f8845901dcb","Type":"ContainerDied","Data":"fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6"} Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.855711 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcdv8" event={"ID":"54f430aa-d0aa-42ee-aff1-0f8845901dcb","Type":"ContainerDied","Data":"65014eef7f8c4173819f74d61fd297326703f08b7b82b6e0d39c92c866e4d03d"} Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.854603 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fcdv8" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.855733 4890 scope.go:117] "RemoveContainer" containerID="fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.893380 4890 scope.go:117] "RemoveContainer" containerID="06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.899689 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fcdv8"] Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.908559 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fcdv8"] Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.922540 4890 scope.go:117] "RemoveContainer" containerID="b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.975637 4890 scope.go:117] "RemoveContainer" containerID="fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6" Jan 21 17:25:28 crc kubenswrapper[4890]: E0121 17:25:28.976143 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6\": container with ID starting with fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6 not found: ID does not exist" containerID="fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.976181 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6"} err="failed to get container status \"fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6\": rpc error: code = NotFound desc = could not find 
container \"fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6\": container with ID starting with fae0f8d1dc36e7307f209c79d1596d303205460cc747ffe6e586c79415e8e3c6 not found: ID does not exist" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.976207 4890 scope.go:117] "RemoveContainer" containerID="06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83" Jan 21 17:25:28 crc kubenswrapper[4890]: E0121 17:25:28.976602 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83\": container with ID starting with 06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83 not found: ID does not exist" containerID="06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.976645 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83"} err="failed to get container status \"06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83\": rpc error: code = NotFound desc = could not find container \"06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83\": container with ID starting with 06bbc4cb86092cf8e428f36dcaa6aea7ae6763f8b2f3a396d8d10b94efb89d83 not found: ID does not exist" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.976676 4890 scope.go:117] "RemoveContainer" containerID="b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c" Jan 21 17:25:28 crc kubenswrapper[4890]: E0121 17:25:28.977170 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c\": container with ID starting with b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c not found: ID does 
not exist" containerID="b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c" Jan 21 17:25:28 crc kubenswrapper[4890]: I0121 17:25:28.977215 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c"} err="failed to get container status \"b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c\": rpc error: code = NotFound desc = could not find container \"b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c\": container with ID starting with b9a2d18e5f4ad977d67227e809abacd5bda033f8341a69319714ef867fe5bc2c not found: ID does not exist" Jan 21 17:25:29 crc kubenswrapper[4890]: I0121 17:25:29.931313 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" path="/var/lib/kubelet/pods/54f430aa-d0aa-42ee-aff1-0f8845901dcb/volumes" Jan 21 17:25:37 crc kubenswrapper[4890]: I0121 17:25:37.920570 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:25:37 crc kubenswrapper[4890]: E0121 17:25:37.921165 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:25:52 crc kubenswrapper[4890]: I0121 17:25:52.914600 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:25:52 crc kubenswrapper[4890]: E0121 17:25:52.915799 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:26:05 crc kubenswrapper[4890]: I0121 17:26:05.915891 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:26:05 crc kubenswrapper[4890]: E0121 17:26:05.917393 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:26:18 crc kubenswrapper[4890]: I0121 17:26:18.913834 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:26:18 crc kubenswrapper[4890]: E0121 17:26:18.914662 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:26:31 crc kubenswrapper[4890]: I0121 17:26:31.914441 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:26:31 crc kubenswrapper[4890]: E0121 17:26:31.915154 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:26:46 crc kubenswrapper[4890]: I0121 17:26:46.914430 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:26:46 crc kubenswrapper[4890]: E0121 17:26:46.915584 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:26:57 crc kubenswrapper[4890]: I0121 17:26:57.921077 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:26:57 crc kubenswrapper[4890]: E0121 17:26:57.922111 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:27:12 crc kubenswrapper[4890]: I0121 17:27:12.913865 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:27:12 crc kubenswrapper[4890]: E0121 17:27:12.914630 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:27:26 crc kubenswrapper[4890]: I0121 17:27:26.914674 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:27:26 crc kubenswrapper[4890]: E0121 17:27:26.915686 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:27:39 crc kubenswrapper[4890]: I0121 17:27:39.925250 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:27:39 crc kubenswrapper[4890]: E0121 17:27:39.929784 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:27:50 crc kubenswrapper[4890]: I0121 17:27:50.915726 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:27:50 crc kubenswrapper[4890]: E0121 17:27:50.916609 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:28:04 crc kubenswrapper[4890]: I0121 17:28:04.913958 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:28:04 crc kubenswrapper[4890]: E0121 17:28:04.914915 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:28:17 crc kubenswrapper[4890]: I0121 17:28:17.919970 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:28:17 crc kubenswrapper[4890]: E0121 17:28:17.922000 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:28:29 crc kubenswrapper[4890]: I0121 17:28:29.918020 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:28:29 crc kubenswrapper[4890]: E0121 17:28:29.918778 4890 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:28:44 crc kubenswrapper[4890]: I0121 17:28:44.914619 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:28:44 crc kubenswrapper[4890]: E0121 17:28:44.915480 4890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qnlzh_openshift-machine-config-operator(67047065-8bad-4e4d-8b91-47e7ee72ffb6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" Jan 21 17:28:59 crc kubenswrapper[4890]: I0121 17:28:59.914373 4890 scope.go:117] "RemoveContainer" containerID="2b03caf7d25495fa50f94edaa36087d927d3fda711eecce27b2b37c0dd4f8deb" Jan 21 17:29:00 crc kubenswrapper[4890]: I0121 17:29:00.492391 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" event={"ID":"67047065-8bad-4e4d-8b91-47e7ee72ffb6","Type":"ContainerStarted","Data":"43f782cbf0b953583207eb05e8baf75dfc25dc7a9ef1a4745028b493f7a31240"} Jan 21 17:29:52 crc kubenswrapper[4890]: I0121 17:29:52.895924 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-88gmr"] Jan 21 17:29:52 crc kubenswrapper[4890]: E0121 17:29:52.897570 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerName="extract-content" Jan 21 17:29:52 crc kubenswrapper[4890]: I0121 
17:29:52.897590 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerName="extract-content" Jan 21 17:29:52 crc kubenswrapper[4890]: E0121 17:29:52.897616 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerName="extract-utilities" Jan 21 17:29:52 crc kubenswrapper[4890]: I0121 17:29:52.897625 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerName="extract-utilities" Jan 21 17:29:52 crc kubenswrapper[4890]: E0121 17:29:52.897654 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerName="registry-server" Jan 21 17:29:52 crc kubenswrapper[4890]: I0121 17:29:52.897664 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerName="registry-server" Jan 21 17:29:52 crc kubenswrapper[4890]: I0121 17:29:52.897889 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f430aa-d0aa-42ee-aff1-0f8845901dcb" containerName="registry-server" Jan 21 17:29:52 crc kubenswrapper[4890]: I0121 17:29:52.900075 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:52 crc kubenswrapper[4890]: I0121 17:29:52.936955 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-88gmr"] Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.063609 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-utilities\") pod \"redhat-operators-88gmr\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.063675 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-catalog-content\") pod \"redhat-operators-88gmr\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.063705 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vd4j\" (UniqueName: \"kubernetes.io/projected/4f48dda0-f58d-4eb6-9cbf-96617542adc2-kube-api-access-8vd4j\") pod \"redhat-operators-88gmr\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.165420 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-utilities\") pod \"redhat-operators-88gmr\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.165480 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-catalog-content\") pod \"redhat-operators-88gmr\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.165499 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vd4j\" (UniqueName: \"kubernetes.io/projected/4f48dda0-f58d-4eb6-9cbf-96617542adc2-kube-api-access-8vd4j\") pod \"redhat-operators-88gmr\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.165908 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-utilities\") pod \"redhat-operators-88gmr\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.166053 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-catalog-content\") pod \"redhat-operators-88gmr\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.187574 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vd4j\" (UniqueName: \"kubernetes.io/projected/4f48dda0-f58d-4eb6-9cbf-96617542adc2-kube-api-access-8vd4j\") pod \"redhat-operators-88gmr\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.224795 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.722553 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-88gmr"] Jan 21 17:29:53 crc kubenswrapper[4890]: I0121 17:29:53.934095 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gmr" event={"ID":"4f48dda0-f58d-4eb6-9cbf-96617542adc2","Type":"ContainerStarted","Data":"af5cb857e36563a657545e7581d391f5ac4f18fdfadcfc9c5d674b721c752c29"} Jan 21 17:29:54 crc kubenswrapper[4890]: I0121 17:29:54.941684 4890 generic.go:334] "Generic (PLEG): container finished" podID="4f48dda0-f58d-4eb6-9cbf-96617542adc2" containerID="db4f8fad69148d02e62acce54a60ecafa5f9059b9b8cab4bc76d323e05b065c9" exitCode=0 Jan 21 17:29:54 crc kubenswrapper[4890]: I0121 17:29:54.941736 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gmr" event={"ID":"4f48dda0-f58d-4eb6-9cbf-96617542adc2","Type":"ContainerDied","Data":"db4f8fad69148d02e62acce54a60ecafa5f9059b9b8cab4bc76d323e05b065c9"} Jan 21 17:29:54 crc kubenswrapper[4890]: I0121 17:29:54.943747 4890 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:29:55 crc kubenswrapper[4890]: I0121 17:29:55.950237 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gmr" event={"ID":"4f48dda0-f58d-4eb6-9cbf-96617542adc2","Type":"ContainerStarted","Data":"51f99fd3889de11f1605ecfab2dfdb5200d5d2af4c74e690a58f491d2f5bf87c"} Jan 21 17:29:56 crc kubenswrapper[4890]: I0121 17:29:56.959330 4890 generic.go:334] "Generic (PLEG): container finished" podID="4f48dda0-f58d-4eb6-9cbf-96617542adc2" containerID="51f99fd3889de11f1605ecfab2dfdb5200d5d2af4c74e690a58f491d2f5bf87c" exitCode=0 Jan 21 17:29:56 crc kubenswrapper[4890]: I0121 17:29:56.959417 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-88gmr" event={"ID":"4f48dda0-f58d-4eb6-9cbf-96617542adc2","Type":"ContainerDied","Data":"51f99fd3889de11f1605ecfab2dfdb5200d5d2af4c74e690a58f491d2f5bf87c"} Jan 21 17:29:57 crc kubenswrapper[4890]: I0121 17:29:57.968857 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gmr" event={"ID":"4f48dda0-f58d-4eb6-9cbf-96617542adc2","Type":"ContainerStarted","Data":"1055366c46a27f85e2c6ba3e7cda3e7d2c0f8d3d7767a40827fc080f1f68c23d"} Jan 21 17:29:57 crc kubenswrapper[4890]: I0121 17:29:57.996704 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-88gmr" podStartSLOduration=3.619516249 podStartE2EDuration="5.996686771s" podCreationTimestamp="2026-01-21 17:29:52 +0000 UTC" firstStartedPulling="2026-01-21 17:29:54.943488095 +0000 UTC m=+7077.304930494" lastFinishedPulling="2026-01-21 17:29:57.320658617 +0000 UTC m=+7079.682101016" observedRunningTime="2026-01-21 17:29:57.986380416 +0000 UTC m=+7080.347822825" watchObservedRunningTime="2026-01-21 17:29:57.996686771 +0000 UTC m=+7080.358129180" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.147979 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx"] Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.149610 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.152560 4890 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.153069 4890 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.166793 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx"] Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.300698 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs265\" (UniqueName: \"kubernetes.io/projected/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-kube-api-access-rs265\") pod \"collect-profiles-29483610-bvdfx\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.300805 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-config-volume\") pod \"collect-profiles-29483610-bvdfx\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.300898 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-secret-volume\") pod \"collect-profiles-29483610-bvdfx\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.403566 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-secret-volume\") pod \"collect-profiles-29483610-bvdfx\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.403682 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs265\" (UniqueName: \"kubernetes.io/projected/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-kube-api-access-rs265\") pod \"collect-profiles-29483610-bvdfx\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.403772 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-config-volume\") pod \"collect-profiles-29483610-bvdfx\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.405114 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-config-volume\") pod \"collect-profiles-29483610-bvdfx\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.410048 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-secret-volume\") pod \"collect-profiles-29483610-bvdfx\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.425923 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs265\" (UniqueName: \"kubernetes.io/projected/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-kube-api-access-rs265\") pod \"collect-profiles-29483610-bvdfx\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:00 crc kubenswrapper[4890]: I0121 17:30:00.511939 4890 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:01 crc kubenswrapper[4890]: I0121 17:30:01.031150 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx"] Jan 21 17:30:02 crc kubenswrapper[4890]: I0121 17:30:02.003704 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" event={"ID":"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d","Type":"ContainerStarted","Data":"c1269bad5c88b29b0b54947c44a9792efd6acbbd8ce2de69c6dcf2149d128063"} Jan 21 17:30:02 crc kubenswrapper[4890]: I0121 17:30:02.004395 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" event={"ID":"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d","Type":"ContainerStarted","Data":"4647e06f0eb905191fbfcb59d9a0a7cc4db9a707ad5617da4af61fc01235833f"} Jan 21 17:30:02 crc kubenswrapper[4890]: I0121 17:30:02.035628 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" 
podStartSLOduration=2.035603028 podStartE2EDuration="2.035603028s" podCreationTimestamp="2026-01-21 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:30:02.026680308 +0000 UTC m=+7084.388122717" watchObservedRunningTime="2026-01-21 17:30:02.035603028 +0000 UTC m=+7084.397045447" Jan 21 17:30:03 crc kubenswrapper[4890]: I0121 17:30:03.225248 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:30:03 crc kubenswrapper[4890]: I0121 17:30:03.225309 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:30:04 crc kubenswrapper[4890]: I0121 17:30:04.018897 4890 generic.go:334] "Generic (PLEG): container finished" podID="3f65a7ae-e2b9-49d9-8bc1-60bdc727686d" containerID="c1269bad5c88b29b0b54947c44a9792efd6acbbd8ce2de69c6dcf2149d128063" exitCode=0 Jan 21 17:30:04 crc kubenswrapper[4890]: I0121 17:30:04.018972 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" event={"ID":"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d","Type":"ContainerDied","Data":"c1269bad5c88b29b0b54947c44a9792efd6acbbd8ce2de69c6dcf2149d128063"} Jan 21 17:30:04 crc kubenswrapper[4890]: I0121 17:30:04.281749 4890 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-88gmr" podUID="4f48dda0-f58d-4eb6-9cbf-96617542adc2" containerName="registry-server" probeResult="failure" output=< Jan 21 17:30:04 crc kubenswrapper[4890]: timeout: failed to connect service ":50051" within 1s Jan 21 17:30:04 crc kubenswrapper[4890]: > Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.330044 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.389070 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-config-volume\") pod \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.389169 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs265\" (UniqueName: \"kubernetes.io/projected/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-kube-api-access-rs265\") pod \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.389395 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-secret-volume\") pod \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\" (UID: \"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d\") " Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.391598 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f65a7ae-e2b9-49d9-8bc1-60bdc727686d" (UID: "3f65a7ae-e2b9-49d9-8bc1-60bdc727686d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.399553 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f65a7ae-e2b9-49d9-8bc1-60bdc727686d" (UID: "3f65a7ae-e2b9-49d9-8bc1-60bdc727686d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.446942 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-kube-api-access-rs265" (OuterVolumeSpecName: "kube-api-access-rs265") pod "3f65a7ae-e2b9-49d9-8bc1-60bdc727686d" (UID: "3f65a7ae-e2b9-49d9-8bc1-60bdc727686d"). InnerVolumeSpecName "kube-api-access-rs265". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.492525 4890 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.492576 4890 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:05 crc kubenswrapper[4890]: I0121 17:30:05.492593 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs265\" (UniqueName: \"kubernetes.io/projected/3f65a7ae-e2b9-49d9-8bc1-60bdc727686d-kube-api-access-rs265\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:06 crc kubenswrapper[4890]: I0121 17:30:06.045868 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" event={"ID":"3f65a7ae-e2b9-49d9-8bc1-60bdc727686d","Type":"ContainerDied","Data":"4647e06f0eb905191fbfcb59d9a0a7cc4db9a707ad5617da4af61fc01235833f"} Jan 21 17:30:06 crc kubenswrapper[4890]: I0121 17:30:06.045930 4890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4647e06f0eb905191fbfcb59d9a0a7cc4db9a707ad5617da4af61fc01235833f" Jan 21 17:30:06 crc kubenswrapper[4890]: I0121 17:30:06.046000 4890 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-bvdfx" Jan 21 17:30:06 crc kubenswrapper[4890]: I0121 17:30:06.439827 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6"] Jan 21 17:30:06 crc kubenswrapper[4890]: I0121 17:30:06.448342 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-kmwl6"] Jan 21 17:30:07 crc kubenswrapper[4890]: I0121 17:30:07.925620 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fc0f85a-930c-4300-9b6f-e45536fb511e" path="/var/lib/kubelet/pods/1fc0f85a-930c-4300-9b6f-e45536fb511e/volumes" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.597931 4890 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ctzf4"] Jan 21 17:30:10 crc kubenswrapper[4890]: E0121 17:30:10.598888 4890 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f65a7ae-e2b9-49d9-8bc1-60bdc727686d" containerName="collect-profiles" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.598902 4890 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f65a7ae-e2b9-49d9-8bc1-60bdc727686d" containerName="collect-profiles" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.605579 4890 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f65a7ae-e2b9-49d9-8bc1-60bdc727686d" containerName="collect-profiles" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.607630 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.623116 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctzf4"] Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.719313 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xghfn\" (UniqueName: \"kubernetes.io/projected/9700231e-6d7e-4e21-8c84-0944d9332e70-kube-api-access-xghfn\") pod \"redhat-marketplace-ctzf4\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.719652 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-utilities\") pod \"redhat-marketplace-ctzf4\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.719754 4890 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-catalog-content\") pod \"redhat-marketplace-ctzf4\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.821349 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xghfn\" (UniqueName: \"kubernetes.io/projected/9700231e-6d7e-4e21-8c84-0944d9332e70-kube-api-access-xghfn\") pod \"redhat-marketplace-ctzf4\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.821474 4890 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-utilities\") pod \"redhat-marketplace-ctzf4\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.821511 4890 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-catalog-content\") pod \"redhat-marketplace-ctzf4\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.822023 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-catalog-content\") pod \"redhat-marketplace-ctzf4\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.822288 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-utilities\") pod \"redhat-marketplace-ctzf4\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.841806 4890 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xghfn\" (UniqueName: \"kubernetes.io/projected/9700231e-6d7e-4e21-8c84-0944d9332e70-kube-api-access-xghfn\") pod \"redhat-marketplace-ctzf4\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:10 crc kubenswrapper[4890]: I0121 17:30:10.945880 4890 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:11 crc kubenswrapper[4890]: I0121 17:30:11.437736 4890 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctzf4"] Jan 21 17:30:12 crc kubenswrapper[4890]: I0121 17:30:12.093214 4890 generic.go:334] "Generic (PLEG): container finished" podID="9700231e-6d7e-4e21-8c84-0944d9332e70" containerID="a0f88296e915bb5ffe72dd578a15c8e846967e4a9897d40c3b1e960d0256315d" exitCode=0 Jan 21 17:30:12 crc kubenswrapper[4890]: I0121 17:30:12.093268 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctzf4" event={"ID":"9700231e-6d7e-4e21-8c84-0944d9332e70","Type":"ContainerDied","Data":"a0f88296e915bb5ffe72dd578a15c8e846967e4a9897d40c3b1e960d0256315d"} Jan 21 17:30:12 crc kubenswrapper[4890]: I0121 17:30:12.093315 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctzf4" event={"ID":"9700231e-6d7e-4e21-8c84-0944d9332e70","Type":"ContainerStarted","Data":"3f265126cad41d72749aa4b856b577977272561cc2d180818289f09ccc23bc1c"} Jan 21 17:30:13 crc kubenswrapper[4890]: I0121 17:30:13.273142 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:30:13 crc kubenswrapper[4890]: I0121 17:30:13.323248 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:30:14 crc kubenswrapper[4890]: I0121 17:30:14.110548 4890 generic.go:334] "Generic (PLEG): container finished" podID="9700231e-6d7e-4e21-8c84-0944d9332e70" containerID="77be08f953dd50158ef86a68f1b9ab336ae21a01a24e1c098d6e15e69c794150" exitCode=0 Jan 21 17:30:14 crc kubenswrapper[4890]: I0121 17:30:14.110620 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctzf4" 
event={"ID":"9700231e-6d7e-4e21-8c84-0944d9332e70","Type":"ContainerDied","Data":"77be08f953dd50158ef86a68f1b9ab336ae21a01a24e1c098d6e15e69c794150"} Jan 21 17:30:15 crc kubenswrapper[4890]: I0121 17:30:15.125418 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctzf4" event={"ID":"9700231e-6d7e-4e21-8c84-0944d9332e70","Type":"ContainerStarted","Data":"4041bbf49c73e836e131c7251693161c48d551ccb60a1299df5ceb122d46fa79"} Jan 21 17:30:15 crc kubenswrapper[4890]: I0121 17:30:15.160584 4890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ctzf4" podStartSLOduration=2.773818758 podStartE2EDuration="5.160564616s" podCreationTimestamp="2026-01-21 17:30:10 +0000 UTC" firstStartedPulling="2026-01-21 17:30:12.097735983 +0000 UTC m=+7094.459178392" lastFinishedPulling="2026-01-21 17:30:14.484481841 +0000 UTC m=+7096.845924250" observedRunningTime="2026-01-21 17:30:15.154898536 +0000 UTC m=+7097.516341015" watchObservedRunningTime="2026-01-21 17:30:15.160564616 +0000 UTC m=+7097.522007025" Jan 21 17:30:15 crc kubenswrapper[4890]: I0121 17:30:15.576622 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-88gmr"] Jan 21 17:30:15 crc kubenswrapper[4890]: I0121 17:30:15.576919 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-88gmr" podUID="4f48dda0-f58d-4eb6-9cbf-96617542adc2" containerName="registry-server" containerID="cri-o://1055366c46a27f85e2c6ba3e7cda3e7d2c0f8d3d7767a40827fc080f1f68c23d" gracePeriod=2 Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.134140 4890 generic.go:334] "Generic (PLEG): container finished" podID="4f48dda0-f58d-4eb6-9cbf-96617542adc2" containerID="1055366c46a27f85e2c6ba3e7cda3e7d2c0f8d3d7767a40827fc080f1f68c23d" exitCode=0 Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.135800 4890 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gmr" event={"ID":"4f48dda0-f58d-4eb6-9cbf-96617542adc2","Type":"ContainerDied","Data":"1055366c46a27f85e2c6ba3e7cda3e7d2c0f8d3d7767a40827fc080f1f68c23d"} Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.563482 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.754525 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vd4j\" (UniqueName: \"kubernetes.io/projected/4f48dda0-f58d-4eb6-9cbf-96617542adc2-kube-api-access-8vd4j\") pod \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.754613 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-catalog-content\") pod \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.754697 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-utilities\") pod \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\" (UID: \"4f48dda0-f58d-4eb6-9cbf-96617542adc2\") " Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.755813 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-utilities" (OuterVolumeSpecName: "utilities") pod "4f48dda0-f58d-4eb6-9cbf-96617542adc2" (UID: "4f48dda0-f58d-4eb6-9cbf-96617542adc2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.760659 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f48dda0-f58d-4eb6-9cbf-96617542adc2-kube-api-access-8vd4j" (OuterVolumeSpecName: "kube-api-access-8vd4j") pod "4f48dda0-f58d-4eb6-9cbf-96617542adc2" (UID: "4f48dda0-f58d-4eb6-9cbf-96617542adc2"). InnerVolumeSpecName "kube-api-access-8vd4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.858601 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vd4j\" (UniqueName: \"kubernetes.io/projected/4f48dda0-f58d-4eb6-9cbf-96617542adc2-kube-api-access-8vd4j\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.858911 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.887160 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f48dda0-f58d-4eb6-9cbf-96617542adc2" (UID: "4f48dda0-f58d-4eb6-9cbf-96617542adc2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:30:16 crc kubenswrapper[4890]: I0121 17:30:16.961144 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f48dda0-f58d-4eb6-9cbf-96617542adc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:17 crc kubenswrapper[4890]: I0121 17:30:17.144290 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gmr" event={"ID":"4f48dda0-f58d-4eb6-9cbf-96617542adc2","Type":"ContainerDied","Data":"af5cb857e36563a657545e7581d391f5ac4f18fdfadcfc9c5d674b721c752c29"} Jan 21 17:30:17 crc kubenswrapper[4890]: I0121 17:30:17.144339 4890 scope.go:117] "RemoveContainer" containerID="1055366c46a27f85e2c6ba3e7cda3e7d2c0f8d3d7767a40827fc080f1f68c23d" Jan 21 17:30:17 crc kubenswrapper[4890]: I0121 17:30:17.144394 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88gmr" Jan 21 17:30:17 crc kubenswrapper[4890]: I0121 17:30:17.180317 4890 scope.go:117] "RemoveContainer" containerID="51f99fd3889de11f1605ecfab2dfdb5200d5d2af4c74e690a58f491d2f5bf87c" Jan 21 17:30:17 crc kubenswrapper[4890]: I0121 17:30:17.190936 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-88gmr"] Jan 21 17:30:17 crc kubenswrapper[4890]: I0121 17:30:17.198833 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-88gmr"] Jan 21 17:30:17 crc kubenswrapper[4890]: I0121 17:30:17.207787 4890 scope.go:117] "RemoveContainer" containerID="db4f8fad69148d02e62acce54a60ecafa5f9059b9b8cab4bc76d323e05b065c9" Jan 21 17:30:17 crc kubenswrapper[4890]: I0121 17:30:17.929821 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f48dda0-f58d-4eb6-9cbf-96617542adc2" path="/var/lib/kubelet/pods/4f48dda0-f58d-4eb6-9cbf-96617542adc2/volumes" Jan 21 17:30:19 crc 
kubenswrapper[4890]: I0121 17:30:19.975575 4890 scope.go:117] "RemoveContainer" containerID="4dd405ce2f3c7acfa49117f003f3a1c580bb4dca76a74bda8c1492791303336f" Jan 21 17:30:20 crc kubenswrapper[4890]: I0121 17:30:20.946575 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:20 crc kubenswrapper[4890]: I0121 17:30:20.947194 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:21 crc kubenswrapper[4890]: I0121 17:30:21.006475 4890 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:21 crc kubenswrapper[4890]: I0121 17:30:21.246695 4890 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:21 crc kubenswrapper[4890]: I0121 17:30:21.300512 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctzf4"] Jan 21 17:30:23 crc kubenswrapper[4890]: I0121 17:30:23.201266 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ctzf4" podUID="9700231e-6d7e-4e21-8c84-0944d9332e70" containerName="registry-server" containerID="cri-o://4041bbf49c73e836e131c7251693161c48d551ccb60a1299df5ceb122d46fa79" gracePeriod=2 Jan 21 17:30:24 crc kubenswrapper[4890]: I0121 17:30:24.209376 4890 generic.go:334] "Generic (PLEG): container finished" podID="9700231e-6d7e-4e21-8c84-0944d9332e70" containerID="4041bbf49c73e836e131c7251693161c48d551ccb60a1299df5ceb122d46fa79" exitCode=0 Jan 21 17:30:24 crc kubenswrapper[4890]: I0121 17:30:24.209456 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctzf4" 
event={"ID":"9700231e-6d7e-4e21-8c84-0944d9332e70","Type":"ContainerDied","Data":"4041bbf49c73e836e131c7251693161c48d551ccb60a1299df5ceb122d46fa79"} Jan 21 17:30:26 crc kubenswrapper[4890]: I0121 17:30:26.875879 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.042288 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-utilities\") pod \"9700231e-6d7e-4e21-8c84-0944d9332e70\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.042391 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-catalog-content\") pod \"9700231e-6d7e-4e21-8c84-0944d9332e70\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.042426 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xghfn\" (UniqueName: \"kubernetes.io/projected/9700231e-6d7e-4e21-8c84-0944d9332e70-kube-api-access-xghfn\") pod \"9700231e-6d7e-4e21-8c84-0944d9332e70\" (UID: \"9700231e-6d7e-4e21-8c84-0944d9332e70\") " Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.044084 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-utilities" (OuterVolumeSpecName: "utilities") pod "9700231e-6d7e-4e21-8c84-0944d9332e70" (UID: "9700231e-6d7e-4e21-8c84-0944d9332e70"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.048553 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9700231e-6d7e-4e21-8c84-0944d9332e70-kube-api-access-xghfn" (OuterVolumeSpecName: "kube-api-access-xghfn") pod "9700231e-6d7e-4e21-8c84-0944d9332e70" (UID: "9700231e-6d7e-4e21-8c84-0944d9332e70"). InnerVolumeSpecName "kube-api-access-xghfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.069328 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9700231e-6d7e-4e21-8c84-0944d9332e70" (UID: "9700231e-6d7e-4e21-8c84-0944d9332e70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.143847 4890 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.143879 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xghfn\" (UniqueName: \"kubernetes.io/projected/9700231e-6d7e-4e21-8c84-0944d9332e70-kube-api-access-xghfn\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.143891 4890 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700231e-6d7e-4e21-8c84-0944d9332e70-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.233151 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctzf4" 
event={"ID":"9700231e-6d7e-4e21-8c84-0944d9332e70","Type":"ContainerDied","Data":"3f265126cad41d72749aa4b856b577977272561cc2d180818289f09ccc23bc1c"} Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.233218 4890 scope.go:117] "RemoveContainer" containerID="4041bbf49c73e836e131c7251693161c48d551ccb60a1299df5ceb122d46fa79" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.233248 4890 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctzf4" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.266566 4890 scope.go:117] "RemoveContainer" containerID="77be08f953dd50158ef86a68f1b9ab336ae21a01a24e1c098d6e15e69c794150" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.271158 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctzf4"] Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.277914 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctzf4"] Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.287177 4890 scope.go:117] "RemoveContainer" containerID="a0f88296e915bb5ffe72dd578a15c8e846967e4a9897d40c3b1e960d0256315d" Jan 21 17:30:27 crc kubenswrapper[4890]: I0121 17:30:27.925002 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9700231e-6d7e-4e21-8c84-0944d9332e70" path="/var/lib/kubelet/pods/9700231e-6d7e-4e21-8c84-0944d9332e70/volumes" Jan 21 17:30:58 crc kubenswrapper[4890]: I0121 17:30:58.503525 4890 generic.go:334] "Generic (PLEG): container finished" podID="98162317-c717-433e-b658-258cfb11a204" containerID="b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e" exitCode=0 Jan 21 17:30:58 crc kubenswrapper[4890]: I0121 17:30:58.503727 4890 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcqzq/must-gather-grqs4" 
event={"ID":"98162317-c717-433e-b658-258cfb11a204","Type":"ContainerDied","Data":"b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e"} Jan 21 17:30:58 crc kubenswrapper[4890]: I0121 17:30:58.504880 4890 scope.go:117] "RemoveContainer" containerID="b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e" Jan 21 17:30:58 crc kubenswrapper[4890]: I0121 17:30:58.751152 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcqzq_must-gather-grqs4_98162317-c717-433e-b658-258cfb11a204/gather/0.log" Jan 21 17:31:06 crc kubenswrapper[4890]: I0121 17:31:06.617172 4890 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qcqzq/must-gather-grqs4"] Jan 21 17:31:06 crc kubenswrapper[4890]: I0121 17:31:06.618133 4890 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qcqzq/must-gather-grqs4" podUID="98162317-c717-433e-b658-258cfb11a204" containerName="copy" containerID="cri-o://b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654" gracePeriod=2 Jan 21 17:31:06 crc kubenswrapper[4890]: I0121 17:31:06.623952 4890 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qcqzq/must-gather-grqs4"] Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.097022 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcqzq_must-gather-grqs4_98162317-c717-433e-b658-258cfb11a204/copy/0.log" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.097756 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.166795 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98162317-c717-433e-b658-258cfb11a204-must-gather-output\") pod \"98162317-c717-433e-b658-258cfb11a204\" (UID: \"98162317-c717-433e-b658-258cfb11a204\") " Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.166877 4890 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snbnx\" (UniqueName: \"kubernetes.io/projected/98162317-c717-433e-b658-258cfb11a204-kube-api-access-snbnx\") pod \"98162317-c717-433e-b658-258cfb11a204\" (UID: \"98162317-c717-433e-b658-258cfb11a204\") " Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.173058 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98162317-c717-433e-b658-258cfb11a204-kube-api-access-snbnx" (OuterVolumeSpecName: "kube-api-access-snbnx") pod "98162317-c717-433e-b658-258cfb11a204" (UID: "98162317-c717-433e-b658-258cfb11a204"). InnerVolumeSpecName "kube-api-access-snbnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.268300 4890 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snbnx\" (UniqueName: \"kubernetes.io/projected/98162317-c717-433e-b658-258cfb11a204-kube-api-access-snbnx\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.314608 4890 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98162317-c717-433e-b658-258cfb11a204-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "98162317-c717-433e-b658-258cfb11a204" (UID: "98162317-c717-433e-b658-258cfb11a204"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.369555 4890 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98162317-c717-433e-b658-258cfb11a204-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.575516 4890 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcqzq_must-gather-grqs4_98162317-c717-433e-b658-258cfb11a204/copy/0.log" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.575809 4890 generic.go:334] "Generic (PLEG): container finished" podID="98162317-c717-433e-b658-258cfb11a204" containerID="b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654" exitCode=143 Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.575855 4890 scope.go:117] "RemoveContainer" containerID="b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.575865 4890 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcqzq/must-gather-grqs4" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.605554 4890 scope.go:117] "RemoveContainer" containerID="b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.667486 4890 scope.go:117] "RemoveContainer" containerID="b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654" Jan 21 17:31:07 crc kubenswrapper[4890]: E0121 17:31:07.668286 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654\": container with ID starting with b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654 not found: ID does not exist" containerID="b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.668331 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654"} err="failed to get container status \"b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654\": rpc error: code = NotFound desc = could not find container \"b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654\": container with ID starting with b45b47e861de927a9e4fb2aaffab4e6f38105047b14cd71332be1d221ebef654 not found: ID does not exist" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.668382 4890 scope.go:117] "RemoveContainer" containerID="b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e" Jan 21 17:31:07 crc kubenswrapper[4890]: E0121 17:31:07.668855 4890 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e\": container with ID starting with 
b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e not found: ID does not exist" containerID="b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.668891 4890 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e"} err="failed to get container status \"b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e\": rpc error: code = NotFound desc = could not find container \"b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e\": container with ID starting with b919aaf5f69e06c7a36a2cbfa0485106696cc14c751c387299d7b17e9a60171e not found: ID does not exist" Jan 21 17:31:07 crc kubenswrapper[4890]: I0121 17:31:07.925669 4890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98162317-c717-433e-b658-258cfb11a204" path="/var/lib/kubelet/pods/98162317-c717-433e-b658-258cfb11a204/volumes" Jan 21 17:31:18 crc kubenswrapper[4890]: I0121 17:31:18.762269 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:31:18 crc kubenswrapper[4890]: I0121 17:31:18.762905 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:31:48 crc kubenswrapper[4890]: I0121 17:31:48.762414 4890 patch_prober.go:28] interesting pod/machine-config-daemon-qnlzh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:31:48 crc kubenswrapper[4890]: I0121 17:31:48.763009 4890 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qnlzh" podUID="67047065-8bad-4e4d-8b91-47e7ee72ffb6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"